Article

Building Extraction and Floor Area Estimation at the Village Level in Rural China Via a Comprehensive Method Integrating UAV Photogrammetry and the Novel EDSANet

1 Institute of Geology, China Earthquake Administration, Beijing 100029, China
2 Key Laboratory of Seismic and Volcanic Hazards, China Earthquake Administration, Beijing 100029, China
3 School of Surveying and Geo-Informatics, Shandong Jianzhu University, Jinan 250101, China
4 College of Geodesy and Geomatics, Shandong University of Science and Technology, Qingdao 266590, China
5 School of Earth and Environmental Sciences, The University of Queensland, Brisbane, QLD 4072, Australia
6 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(20), 5175; https://doi.org/10.3390/rs14205175
Submission received: 4 September 2022 / Revised: 12 October 2022 / Accepted: 13 October 2022 / Published: 16 October 2022
(This article belongs to the Special Issue Recent Progress in UAV-AI Remote Sensing)
Figure 1. The geographical location of the study area.
Figure 2. Flight route map in the research area.
Figure 3. Flowchart of building extraction and floor area estimation in this research.
Figure 4. The architecture of the EDSANet model consists of two parts: the semantic encoding branch and the spatial information encoding branch. (a) Spatial information encoding module, (b) semantic encoding module, (c) feature fusion module, (d) dual attention module, and (e) attention feature refinement module.
Figure 5. The architecture of the dual attention module consists of two branches: the kernel attention module and the channel attention module. (a) Kernel attention module, and (b) channel attention module.
Figure 6. An example of data augmentation by rotating and flipping the rural Weinan building dataset.
Figure 7. Changes in accuracy and loss for the EDSANet model in the training process.
Figure 8. Flowchart of building height and floor area estimation.
Figure 9. Building extraction results of different deep learning models with the rural Weinan building dataset. (a–d) Four images randomly selected to show the test results; the rows show the building extraction results of SegNet, UNet, Deeplabv3+, AGs-Unet, MAP-Net, ARC-Net, and EDSANet for the four groups of comparison experiments. Green represents the buildings and black represents the background; in the ground truth, red represents the buildings and black represents the background.
Figure 10. Spatial distribution of rural buildings in Helan village. (a) Ground truth of homesteads and (b) identification results based on the EDSANet model.
Figure 11. UAV-based estimation of the number of floors in rural buildings. (a) The DSM based on the photogrammetry workflow with the overlapping UAV images, (b) the DTM based on the point cloud filtering algorithm with the DSM images, and (c) the DTM subtracted from the DSM to create the nDSM.
Figure 12. Frequency distribution diagram of the nDSM pixel values.
Figure 13. Classification results of building floors with the nDSM.
Figure 14. Frequency distribution diagram of building heights.
Figure 15. Example of extracted results from the ablation experiment with the Weinan building dataset. (a) The input images, (b) results extracted with the proposed EDSANet model, (c) EDSANet without DAM, and (d) EDSANet without AFRM.

Abstract

Dynamic monitoring of building environments is essential for observing rural land changes and socio-economic development, especially in agricultural countries, such as China. Rapid and accurate building extraction and floor area estimation at the village level are vital for the overall planning of rural development and intensive land use and the “beautiful countryside” construction policy in China. Traditional in situ field surveys are an effective way to collect building information but are time-consuming and labor-intensive. Moreover, rural buildings are usually covered by vegetation and trees, leading to incomplete boundaries. This paper proposes a comprehensive method to perform village-level homestead area estimation by combining unmanned aerial vehicle (UAV) photogrammetry and deep learning technology. First, to tackle the problem of complex surface feature scenes in remote sensing images, we proposed a novel Efficient Deep-wise Spatial Attention Network (EDSANet), which uses dual attention extraction and attention feature refinement to aggregate multi-level semantics and enhance the accuracy of building extraction, especially for high-spatial-resolution imagery. Qualitative and quantitative experiments were conducted with the newly built dataset (named the rural Weinan building dataset) with different deep learning networks to examine the performance of the EDSANet model in the task of rural building extraction. Then, the number of floors of each building was estimated using the normalized digital surface model (nDSM) generated from UAV oblique photogrammetry. The floor area of the entire village was rapidly calculated by multiplying the area of each building in the village by the number of floors. The case study was conducted in Helan village, Shaanxi province, China. The results show that the overall accuracy of the building extraction from UAV images with the EDSANet model was 0.939 and that the precision reached 0.949. The buildings in Helan village primarily have two stories, and their total floor area is 3.1 × 10⁵ m². The field survey results verified that the accuracy of the nDSM model was 0.94; the RMSE was 0.243. The proposed workflow and experimental results highlight the potential of UAV oblique photogrammetry and deep learning for rapid and efficient village-level building extraction and floor area estimation in China, as well as worldwide.

1. Introduction

Homesteads are an important part of basic rural geographic information and multifunctional complex spaces for rural residents [1,2,3]. With the advancement of urban–rural economic integration in China, many farmers have migrated to cities. From 2000 to 2016, the rural resident population in China decreased from 808 to 589 million (a decline of 27.1%) [4]. The migration of rural residents to cities reduces the area of rural homestead land. However, due to the free acquisition and use of homesteads under the current system, local governments have launched new rural construction without proper scientific planning, which has increased the area of idle rural homesteads by 20.6% [5], from 0.99 to 1.21 million km² [4]. Compared to developed cities, rural areas are dominated by low-rise buildings, and the excessive occupation of land resources by farmers affects land-use efficiency [6]. To promote rural development, the Chinese government has proposed “beautiful countryside” construction. In-depth investigations should be conducted on the living conditions of farmers, and land-use areas in rural areas should be rationally planned. Field surveys can provide accurate information about farm residents but require time and labor. Moreover, land use for rural homesteads in developing countries is usually scattered, creating barriers to the acquisition of rural building information. Therefore, additional methods should be proposed to quickly and accurately extract building information and estimate floor area in rural environments.
To ameliorate adverse social problems, building density regulations (such as those for building heights or floor area ratios) are common practice in urban planning and management worldwide [6]. Various remote sensing products and classification methods have been used to extract building coverage areas [7,8] and building heights [9,10]; the nDSM [11] (the difference between a DSM and a digital terrain model (DTM)) is widely used in height estimation [12]. Ji and Tang [13] proposed three methods for gross floor area estimation from monocular optical imagery using the NoS R-CNN model. Given the densely populated villages and scattered land-use layout in China, UAVs have become the latest trend in rural homestead detection because of their flexibility, low cost, real-time results, and high resolution [14]. Nyaruhuma et al. [15] used oblique photogrammetry to reconstruct 3D buildings on an urban scale. High-resolution UAV images provide sufficiently detailed information but also pose new challenges for existing building extraction methods [16,17]. Previous studies have primarily focused on the extraction of architectural features based on machine learning, including maximum likelihood classification [18], support vector machines [19], and object-based classification methods [20]. However, machine learning algorithms based on feature extraction rely heavily on manual parameter setting and expert knowledge, which usually leads to poor generalization across different environmental backgrounds [21,22]. In rural areas with more complex surface compositions, the use of traditional algorithms for ground object classification leaves room for improvement [23].
Owing to the complexity of image backgrounds and the semantic texture of buildings, automatic and high-precision building extraction from UAV images remains uncertain [24,25]. Recently, scholars have employed deep learning technology to identify building contour information [26,27,28,29]. Long et al. proposed the FCN model for pixel-level semantic segmentation, the first end-to-end fully convolutional network that accepts inputs of any size for image segmentation, and it successfully led to a new wave of semantic segmentation tasks [30]. Subsequently, many variant FCN-based models have improved the feature expression capabilities to obtain better experimental results (such as SegNet [31], U-Net [32], and ERFNet [33]). Liu et al. [34] proposed a novel convolutional neural network named USPP, combining an encoder–decoder with a spatial pyramid pooling module, for building extraction from high-resolution remote sensing images. Konstantinidis et al. [35] proposed a modular CNN that improves the performance of building detectors by employing a histogram of oriented gradients and local binary patterns on a remote sensing dataset. Zhang [36] developed a method for estimating homestead areas based on UAV images and the U-Net algorithm; the results demonstrate that, in rural areas with complex surface compositions, the deep learning method can achieve fast, stable, and high-precision results. Liao et al. [37] proposed a boundary-preserved model that jointly learns the contours and structures of buildings; experiments on the WHU, Aerial, and Massachusetts building datasets showed that the proposed model outperformed other state-of-the-art methods. Xiao et al. [38] proposed a shifted-window transformer-based encoding booster to capture the semantic information of large buildings in high-resolution remote sensing images. Li et al. [39] proposed a novel end-to-end network integrating lightweight spatial and channel attention modules to adaptively refine features for building extraction tasks. Wei et al. [40] proposed a multi-branch network for the extraction of rural homesteads based on aerial images. Jing et al. [41] proposed an efficient memory module to enhance the learning ability of deep learning models in building extraction. Li et al. [42] proposed a global style and local matching contrastive learning model for image-level and pixel-level representation. However, most existing deep learning models focus on stacking complex architectures and parameter settings to improve accuracy, which brings disadvantages such as extensive computation and slow iteration [43]. Moreover, for comprehensive building information extraction, high-resolution remote sensing images cannot directly reveal the number of floors of a homestead. Using remote sensing data with high spatiotemporal resolution to estimate the area of village-level homesteads at the pixel level remains challenging. Comprehensive methods and models should therefore combine building extraction with floor area estimation at the village level.
Here, we propose a comprehensive method for building extraction and floor area estimation of village-level homesteads by combining UAV oblique photogrammetry and deep learning technology. First, the footprint of each building is identified using the proposed novel EDSANet model, which employs dual attention extraction and attention feature refinement to enhance the accuracy of building extraction. Then, the number of floors of each building is estimated using the nDSM generated from UAV remote sensing. The total floor area of the homesteads is rapidly calculated by multiplying the floor area of each building by its number of floors. A case study was conducted in Helan village, Shaanxi province, China. The experiments demonstrate that the proposed method can achieve rapid and low-cost results in building extraction and floor area estimation in rural villages. To summarize, the main contributions of this paper are as follows:
(1)
We propose a comprehensive method combining UAV oblique photogrammetry and deep learning technology for building extraction and floor area estimation of village-level homesteads. A novel EDSANet model is proposed to tackle the problem of complex surface feature scenes in remote sensing images and improve performance in building extraction;
(2)
We designed a semantic encoding module that applies three down-sample stages (with atrous convolution) to enlarge the receptive field, and a spatial information encoding module with only six layers in three stages that produces features at one eighth of the original input resolution to enrich spatial details and improve the accuracy of building extraction;
(3)
A dual attention module is proposed to extract useful information from the kernel and channel, respectively. To adjust the excessive convergence of building feature information after attention extraction, we propose an attention feature refinement module to further improve the extraction effect of the model for useful features by redefining the attention features, thereby improving the accuracy.
The remainder of this paper is organized as follows: Section 2 describes the study area and data. Section 3 presents deep learning methods for building extraction and the UAV oblique photogrammetry method for floor area estimation. Section 4 introduces the results of the building extraction and floor area estimation. The discussion and conclusions are presented in Section 5 and Section 6, respectively.

2. Study Area and Data

2.1. Study Area

Weinan City is located in Shaanxi province, China, from 34°13′N to 35°52′N and 108°50′E to 110°38′E. According to the 2017 census, the total population of the city is approximately 5.38 million, and it has an area of 13,134 km². Since 2018, Shaanxi province has vigorously promoted rural innovation and reform and accelerated the implementation of the rural revitalization strategy. The pilot reform project in Weinan city achieved remarkable success. Based on a field survey, this research selected Helan village, Fuping county, Weinan city, Shaanxi province as the research area. The village is located between the Guanzhong Plain and the northern Shaanxi Plateau. The village has an area of 3.88 × 10⁴ m² with 205 households (of which 151 are residents) and a registered population of 321. The buildings are densely distributed in the research area, the village roads are planted with regular arbor forests, and parts of the homesteads are shaded by tall trees or shrubs. An overview of the study area is presented in Figure 1.

2.2. UAV Data

The experimental data utilized in this research were acquired by a small four-rotor unmanned aerial vehicle (UAV). The drone model was an INSPIRE 2 (Shenzhen DJI Innovation Technology Co., Ltd., Shenzhen, China) equipped with a Zenmuse X5S HD camera, which has a Four Thirds CMOS sensor with an effective pixel count of 16 million and a built-in optical imaging lens composed of nine glass elements in seven groups. The UAV was equipped with GPS and GLONASS dual satellite navigation systems, which can be used to autonomously plan the flight path in a study area. Table 1 presents detailed information on the UAV equipment.
A warm, clear, and windless day (2 August 2018) was chosen to ensure stability for the UAV photography. The flight track ranged from 108°50′E to 110°38′E and 34°13′N to 35°52′N (Figure 2). The flight route ran from the southeast corner to the northwest corner of the study area, and pictures were taken along an S-shaped route. To construct photogrammetric stereo pairs, the two adjacent images were set with an 85% forward overlap and a 75% side overlap. The spatial resolution of the UAV data reached 2.3 cm.

3. Methodology

Figure 3 illustrates the detailed workflow, which includes seven principal steps. The first and second steps consisted of obtaining the orthophoto of the research area from the aerial UAV images. The orthophoto of the research area and the building sample dataset were produced through data preprocessing and augmentation. The proposed EDSANet model was then used to extract the building footprint of the study area, the accuracy was evaluated using five metrics, and the segmented images were merged into an entire image. Based on the UAV point cloud data, the oblique photogrammetry method was applied to generate the DSM, DTM, and nDSM to determine the building height. Lastly, the floor area of the homesteads in the study area was calculated based on the building footprints and the numbers of floors.

3.1. Building Extraction with the EDSANet Model

3.1.1. EDSANet Architecture

We propose a novel fully connected network named the Efficient Deep-wise Spatial Attention Network (EDSANet) to tackle the problem of complex surface feature scenes in remote sensing images and improve the efficiency and accuracy of building extraction tasks. Figure 4 shows an overview of the EDSANet architecture, including two branch networks composed of four units. (1) We first designed a semantic encoding module (SEM, Figure 4b), which employs channel splitting and shuffling to reduce computation and maintain higher segmentation accuracy. (2) A dual attention module (DAM, Figure 4d), consisting of spatial attention and channel attention, and an attention feature refinement module (AFRM, Figure 4e) were designed to make full use of the multi-level feature maps simultaneously, which helps predict the pixel-wise labels in each stage. (3) A spatial information encoding module (SIEM, Figure 4a) was used to enhance spatial semantic information and preserve spatial details. (4) We developed a simple feature fusion module (FFM, Figure 4c) to better aggregate the context information and spatial information [44].
First, input images are fed into the SEM to generate four feature maps ($F_{h,1}$, $F_{h,2}$, $F_{h,3}$, $F_{h,4}$) with decreasing spatial resolution. The feature maps $F_{h,3}$ and $F_{h,4}$ have the same number of channels, with different dilation rates, to enlarge the receptive field of the convolutional filters. Then, inspired by the efficiency of dilated convolution [45], we adopted a one-eighth down-sample strategy. As Equation (1) shows, the final segmentation $FFM_{h,s}$ is obtained by combining the high-resolution feature map $F_h$ with the spatial feature map $F_s$ from the SIEM:
$FFM_{h,s} = F_{up}(\mathrm{conv}([F_h, F_s]))$, (1)
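For concreteness, the following is a minimal PyTorch sketch of the fusion step in Equation (1); the channel sizes, the 3 × 3 fusion convolution, and the bilinear upsampling are assumptions, as the paper does not spell out the exact FFM configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusionSketch(nn.Module):
    """Equation (1): concatenate the semantic map F_h and the spatial map F_s,
    fuse them with a convolution, then upsample to the input resolution."""
    def __init__(self, ch_h=128, ch_s=128, out_ch=2):  # assumed channel sizes
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(ch_h + ch_s, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.PReLU(out_ch),
        )

    def forward(self, f_h, f_s, out_size):
        x = torch.cat([f_h, f_s], dim=1)          # [F_h, F_s]
        x = self.fuse(x)                          # conv(.)
        return F.interpolate(x, size=out_size,    # F_up(.)
                             mode="bilinear", align_corners=False)
```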

3.1.2. Semantic Encoding Module (SEM)

This building block was designed with inspiration from lightweight image classification model strategies, such as those in Ma et al. [46], Zhang et al. [47], and Sandler et al. [48]. The models mentioned above apply five down-samplings to the input image, so the final output is only 1/32 of the input size, which can lead to a significant loss of spatial detail. As Table 2 shows, our proposed SEM is based on this building block but applies only three down-samplings (the output resolution is one eighth of the original image resolution, with 32, 64, and 128 channels). In stages three and four, atrous convolution is introduced to enlarge the receptive field; a sketch of this stage layout is given below.
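The sketch assumes plain 3 × 3 Conv-BN-PReLU layers, whereas the actual SEM building block (Table 2) uses channel splitting and shuffling; only the stage, stride, channel, and dilation layout is taken from the text.

```python
import torch.nn as nn

def conv_bn_prelu(in_ch, out_ch, stride=1, dilation=1):
    # For a 3x3 kernel, padding = dilation preserves spatial size at stride 1.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride,
                  padding=dilation, dilation=dilation, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.PReLU(out_ch),
    )

class SemanticEncoderSketch(nn.Module):
    """Three down-samplings to 1/8 resolution; stages three and four keep the
    same channel count but use increasing dilation (atrous convolution)."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.stage1 = conv_bn_prelu(in_ch, 32, stride=2)             # 1/2
        self.stage2 = conv_bn_prelu(32, 64, stride=2)                # 1/4
        self.stage3 = conv_bn_prelu(64, 128, stride=2, dilation=2)   # 1/8, atrous
        self.stage4 = conv_bn_prelu(128, 128, dilation=4)            # 1/8, atrous

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        f4 = self.stage4(f3)
        return f1, f2, f3, f4   # F_h,1 ... F_h,4
```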

3.1.3. Spatial Information Encoding Module (SIEM)

To improve the performance of semantic segmentation, the model aims to effectively combine high-level semantics and low-level details. As the SEM was not designed to retain spatial details or low-level information, we add the shallow SIEM, which has only six layers in three stages; each layer consists of a convolution operation (Conv), batch normalization (BN), and a parametric rectified linear unit (PReLU) [49]. The first and second layers of each stage have the same number of filters (the first with a stride of 2) and the same output feature map size. Therefore, a feature map at one eighth of the original input resolution is extracted by the SIEM, which enriches the spatial details due to its high channel capacity.
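A sketch under those stated constraints (six Conv-BN-PReLU layers in three stages, 1/8 output resolution); the filter counts are assumptions, and conv_bn_prelu is the helper defined in the SEM sketch above.

```python
import torch.nn as nn

class SpatialEncoderSketch(nn.Module):
    """Six Conv-BN-PReLU layers in three two-layer stages. The first layer of
    each stage halves the resolution, so the output is 1/8 of the input."""
    def __init__(self, in_ch=3, widths=(64, 64, 128)):  # assumed filter counts
        super().__init__()
        layers, prev = [], in_ch
        for w in widths:
            layers.append(conv_bn_prelu(prev, w, stride=2))  # down-sample
            layers.append(conv_bn_prelu(w, w))               # refine, same size
            prev = w
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)   # F_s at 1/8 resolution, high channel capacity
```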

3.1.4. Dual Attention Module (DAM)

For the spatial dimension, we designed an attention mechanism based on kernel attention named the kernel attention module (KAM). For the channel dimension, the number of input channels C is normally far less than the number of pixels N contained in the feature maps (i.e., C ≪ N). Therefore, the complexity of the softmax function over channels is not high. Thus, we utilized a channel attention mechanism based on the dot-product [50] named the channel attention module (CAM). As Figure 5 shows, using the KAM, which models the long-range dependencies of positions, and the CAM, which models the long-range dependencies of channels, we designed the dual attention module (DAM) to enhance the discriminative ability of the feature maps extracted by each layer.
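The paper defines the CAM at this level and cites the dot-product attention of Fu et al. [50]; the sketch below follows that formulation (a C × C channel affinity with a learnable residual weight) and should be read as an assumption about the details rather than the exact module.

```python
import torch
import torch.nn as nn

class ChannelAttentionSketch(nn.Module):
    """Dot-product channel attention: the affinity matrix is C x C, which is
    cheap to softmax because C << N (the number of pixels)."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = x.view(b, c, -1)                     # (B, C, N)
        k = q.transpose(1, 2)                    # (B, N, C)
        energy = torch.bmm(q, k)                 # (B, C, C) channel affinities
        attn = torch.softmax(energy, dim=-1)     # softmax over channels
        out = torch.bmm(attn, q).view(b, c, h, w)
        return self.gamma * out + x              # residual connection
```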

3.1.5. Deep Supervision

As providing supervision to the hidden layers reduces classification errors [21], researchers have adopted similar strategies [51] to ease the loss propagation in shallow layers. Therefore, we adopted auxiliary losses (Equation (2)) in stages two to four to supervise the predictions:
$L_t = \alpha L_f + \beta \sum_{i=1}^{n} L_i$, (2)
where $\alpha$ and $\beta$ are the weights of the main loss function and the auxiliary losses, with both weights set to 1; $L_t$ is the total loss; $L_f$ represents the loss for the output layer; and $L_i$ represents the loss of the $i$-th stage after applying dual attention and feature refinement.
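A minimal sketch of Equation (2), assuming each auxiliary prediction has already been upsampled to the label resolution and that the binary cross-entropy of Section 3.1.6 is used for both the main and auxiliary terms.

```python
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def total_loss(main_logits, aux_logits_list, target, alpha=1.0, beta=1.0):
    """Equation (2): L_t = alpha * L_f + beta * sum_i L_i. The auxiliary
    heads correspond to stages two to four; both weights are 1 in the paper."""
    l_f = bce(main_logits, target)
    l_aux = sum(bce(aux, target) for aux in aux_logits_list)
    return alpha * l_f + beta * l_aux
```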

3.1.6. Loss Function

The loss function has an essential impact on model accuracy and, usually, the most suitable loss function depends on the data properties and the class definitions [28]. Cross-entropy loss is widely used in two-dimensional semantic segmentation tasks. The aim of the learning-based remote sensing building extraction task is to train a binary classifier: the positive samples are pixels representing buildings, whereas the negative samples are pixels containing background. We employed the binary cross-entropy loss (Equation (3)) [52] in the training process:
$H_p(q) = -\frac{1}{N} \sum_{i=1}^{N} y_i \cdot \log(p(y_i)) + (1 - y_i) \cdot \log(1 - p(y_i))$, (3)
where $y_i$ is the label of pixel $i$ (1 for building pixels and 0 for background pixels) and $p(y_i)$ is the predicted probability that the pixel belongs to a building, for all $N$ pixels.
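As a toy check of Equation (3), PyTorch's built-in BCELoss reproduces the formula directly:

```python
import torch
import torch.nn as nn

# Two building pixels (y = 1) and one background pixel (y = 0)
# with predicted probabilities p.
p = torch.tensor([0.9, 0.6, 0.2])
y = torch.tensor([1.0, 1.0, 0.0])
loss = nn.BCELoss()(p, y)  # -(1/N) * sum(y*log(p) + (1-y)*log(1-p))
manual = -(y * p.log() + (1 - y) * (1 - p).log()).mean()
assert torch.allclose(loss, manual)
```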

3.2. Data Preprocessing

Data preprocessing for the deep learning workflow primarily consists of image clipping and image labeling. Building segmentation is a binary classification task distinguishing buildings from non-building elements [21]. The building samples were intended to contain the various types of buildings in the study area, and the building labels were manually created in ArcGIS 10.2. The pixel values of each image were scaled to the interval [0, 1] by dividing by 255. To facilitate the deep learning computation, the original image was uniformly cropped into 256 × 256 pixel tiles with an overlap of 56 pixels between two adjacent tiles.
Data augmentation is an effective way to enlarge a dataset and avoid overfitting [53]. As presented in Figure 6, the images were rotated by 90°, 180°, and 270°, and random horizontal and vertical flipping was performed with a probability of 0.5. After data augmentation, 4980 images with 256 × 256 pixels were generated. The spatial resolution of these images was about 2.3 to 5.3 cm. A total of 30% of the images were randomly selected as the test set, while the remaining images formed the training set. The final results of the building extraction were obtained by applying a threshold of 0.5. No additional post-processing was performed in this study.
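A NumPy sketch of the tiling and augmentation described above; border tiles that do not fit evenly are simply dropped here, which is a simplification.

```python
import numpy as np

TILE, OVERLAP = 256, 56
STRIDE = TILE - OVERLAP  # 200-pixel step between adjacent tiles

def tile_image(img):
    """Cut an (H, W, C) orthophoto into overlapping 256 x 256 tiles,
    scaled to [0, 1] by dividing by 255."""
    img = img.astype(np.float32) / 255.0
    h, w = img.shape[:2]
    return [img[r:r + TILE, c:c + TILE]
            for r in range(0, h - TILE + 1, STRIDE)
            for c in range(0, w - TILE + 1, STRIDE)]

def augment(tile, rng=np.random.default_rng()):
    """Rotate by a multiple of 90 degrees, then flip each axis with p = 0.5."""
    tile = np.rot90(tile, k=rng.integers(0, 4))
    if rng.random() < 0.5:
        tile = np.flipud(tile)
    if rng.random() < 0.5:
        tile = np.fliplr(tile)
    return tile
```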

3.3. Experimental Setting

The experiments were conducted using the PyTorch deep learning framework. All experiments were run on a server with a 12th Gen Intel(R) Core™ i9-12900KF CPU (3.20 GHz) and an NVIDIA GeForce RTX 3090 GPU (24 GB). All deep learning models were trained for 100 epochs with a batch size of 16 and randomly shuffled input data. The Adam optimizer was applied with an initial learning rate of 0.0001 and a weight decay of 0.0001. Figure 7 presents the dynamic changes in the accuracy and loss of the EDSANet model during the training process with the rural Weinan building dataset: the loss decreased and the accuracy increased as the training epochs increased; after the number of epochs reached 60, the model training tended to stabilize, and the accuracy remained high.
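A minimal training-loop sketch with these settings; model and train_set are placeholders for the EDSANet network and the rural Weinan building dataset, and the auxiliary losses of Section 3.1.5 are omitted for brevity.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

for epoch in range(100):
    for images, labels in loader:
        logits = model(images)            # building-probability logits
        loss = criterion(logits, labels)  # Equation (3)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```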

3.4. Evaluation Metrics

Five common evaluation metrics were employed for quality evaluation in this research: overall accuracy (OA) (Equation (4)), precision (Equation (5)), recall (Equation (6)), F1-score (F1) (Equation (7)), and intersection-over-union (IoU) (Equation (8)). The five metrics are calculated as follows:
$\mathrm{Overall\ Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$, (4)
$\mathrm{Precision} = \frac{TP}{TP + FP}$, (5)
$\mathrm{Recall} = \frac{TP}{TP + FN}$, (6)
$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$, (7)
$\mathrm{IoU} = \frac{TP}{TP + FP + FN}$, (8)
where P is the number of positive samples, N is the number of negative samples, TP is the number of true positives, TN is the number of true negatives, FP is the number of false positives, and FN is the number of false negatives.
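For reference, the five metrics computed from a pair of binary masks in NumPy:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Equations (4)-(8) from binary masks (1 = building, 0 = background)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    oa = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    return oa, precision, recall, f1, iou
```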

3.5. Building Height and Floor Area Estimation

Figure 8 shows the workflow for the building height and floor area estimation. The UAV images obtained from field surveys were first fed into the Pix4Dmapper software (version 4.5.6) [54]. This software performs three image processing steps: initial processing, point cloud and mesh generation, and DSM/orthomosaic generation [55]. The DSM can be extracted from overlapping aerial images obtained with photogrammetry technology using the location information stored in the header file of each aerial image from the UAV flight [56]. Based on the DSM, a point-cloud filtering algorithm with mathematical morphology was employed to identify the ground points within each filter window. Subsequently, the ground objects on the surface (including buildings, trees, and other non-ground points) were eliminated, forming the DTM data, which represent the terrain elevation information [57,58,59]. The difference between the DSM and the DTM is referred to as the nDSM, which is widely used in height estimation; it gives the height of the rural elements above the terrain [60]. Based on field surveys of the usual heights of local buildings, a threshold was set to estimate the number of floors in rural buildings. Using the footprint area of the buildings identified by the EDSANet model, the number of floors of each building in the nDSM was extracted and estimated in the ArcGIS environment (version 10.2). Lastly, the total floor area of the homesteads in the study area was obtained by summing the construction areas of each floor. The formula is as follows:
$\mathrm{Area}_{\mathrm{floors}} = \sum_{i=1}^{\mathrm{floors}_{\max}} \mathrm{Area}_{\mathrm{grid}} \times N_i$, (9)
where $\mathrm{Area}_{\mathrm{floors}}$ is the total floor area of the homesteads, $\mathrm{Area}_{\mathrm{grid}}$ is the area of one grid cell at the nDSM resolution, $N_i$ is the number of floors, and $i$ ranges from 1 to $\mathrm{floors}_{\max}$ for each building.
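A sketch of this computation, assuming the per-pixel floor classification later described in Section 4.2 (heights of 1–4 m, 4–8 m, and 8–12 m mapped to one, two, and three floors; Table 4); the binning helper and the clipping of heights above 12 m are assumptions.

```python
import numpy as np

def floor_area(ndsm, footprint_mask, cell_area, bins=(1.0, 4.0, 8.0, 12.0)):
    """Equation (9): classify building pixels into floor counts by height,
    then sum (pixel area x floor count). cell_area is the area of one nDSM
    grid cell in square metres."""
    heights = ndsm[footprint_mask]          # heights of building pixels only
    floors = np.digitize(heights, bins)     # 0 below 1 m (courtyards), 1..3
    floors = np.clip(floors, 0, 3)          # treat > 12 m as three floors
    return float(np.sum(floors * cell_area))
```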

4. Results

4.1. Building Extraction Using Deep Learning Models

Six classic and state-of-the-art deep learning models, namely SegNet [31], UNet [30,32], Deeplabv3+ [61], MAP-Net [62], ARC-Net [23], and AGs-Unet [21], were compared to verify the performance and efficiency of the proposed EDSANet model with the rural Weinan building dataset. Figure 9 presents the qualitative results of building extraction using different deep learning models. SegNet returned too many false positives and false negatives, exhibiting the worst performance on the dataset. Deeplabv3+, MAP-Net, and AGs-Unet presented quite similar performances in building extraction. For the proposed EDSANet model, the building segmentation results were satisfactory, and most buildings were generally well-segmented regardless of the type of roof (e.g., colored steel tile or sloped tile); additionally, the building footprints were very clear. However, the deep learning model could not clearly separate the boundaries between households in connected buildings (Figure 9a,b).
The specific analysis of the figure is as follows: Columns (a)–(d) represent four images randomly selected to show the test results. In (a) and (b), the proposed EDSANet model achieved effective completeness in the extracted results for whole single buildings. In the second column of buildings in (c), compared with AGs-Unet and ARC-Net, EDSANet clearly extracted the boundaries of the buildings and showed the distinct gaps between them. Moreover, EDSANet expressed the details surrounding the building gaps better than UNet, MAP-Net, and Deeplabv3+, as shown in the lower right corner of the building extraction results in (d), although it did not match the boundary smoothness achieved by the ARC-Net model.
Table 3 presents the quantitative results of the building segmentation with the rural Weinan building dataset. SegNet obtained an overall accuracy of 0.740, while other models were all above 0.80. ARC-Net obtained an overall accuracy of 0.929 with a precision of 0.876, while EDSANet obtained an overall accuracy of 0.939 with an IoU of 0.848. In the experiments with the rural Weinan dataset, our proposed EDSANet model better balanced efficiency and accuracy compared to the MAP-Net and the ARC-Net models and achieved optimality for four evaluation metrics but not for recall, where Deeplabv3+ held the highest score of 0.946. Both the qualitative and quantitative experiment results demonstrate that EDSANet can effectively extract and fuse the features of rural buildings, improving the extraction accuracy for rural buildings. The results of the building extraction using the EDSANet model in Helan village are presented in Figure 10.

4.2. Building Height Estimation

Figure 11a shows the DSM extracted from the overlapping aerial images using photogrammetry technology. The DTM, based on morphological filtering, was utilized to obtain the ground area in the DSM (Figure 11b). The pixel values of the nDSM represent the height of the rural elements above the terrain (Figure 11c) and were calculated using the difference between the DSM and the DTM. The enclosed building area and the vegetation area on the ground cannot be correctly distinguished based only on the difference in height data. In the extraction of ground objects from high-resolution remote sensing data, the building segmentation precision obtained from the combination of spectral and height information is generally higher than that obtained using only spectral information or only height information.
The frequency distribution of the nDSM pixel values (Figure 12) was then calculated to obtain the building height information. Two pixel-value peaks appeared near height differences of 0.3 m and 4 m. The pixel value of 0.3 m represents farmland crops and country roads, and the height difference of 4 m primarily represents the height of the roofs of one-story buildings or the walls of courtyards. All pixels in the nDSM grid with values less than 0.3 m were removed to avoid interference when extracting the building height. Moreover, because of the low reflectivity of vegetation in the red band, the vegetation was well-extracted in the red band of the DOM image; the vegetation pixels in the nDSM were then removed with a raster operation in ArcGIS.
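In code form, these masking steps reduce to a few raster operations; dsm, dtm, and red_band are placeholder arrays on the same grid, and RED_VEG_THRESHOLD stands in for the vegetation threshold applied in ArcGIS, whose value the text does not specify.

```python
import numpy as np

ndsm = dsm - dtm                             # nDSM = DSM - DTM
ndsm[ndsm < 0.3] = np.nan                    # drop crops, roads, low features
ndsm[red_band < RED_VEG_THRESHOLD] = np.nan  # low red reflectance -> vegetation
```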
The building segmentation results of the deep learning method were a set of architectural and non-architectural images without a spatial reference. To facilitate the calculation and display of the results, the raster-based building footprint was converted into vector data in ArcGIS software after map projection to the coordinate system of the reference image, which also made it possible to further calculate the floor area. Furthermore, the interference pixels in the nDSM model, including vegetation and roads, were removed with the mask of the homestead area. In accordance with the results of the field survey, the floor heights of the buildings in the study area were set at 4 m intervals: height differences of 1–12 m were classified as the first, second, and third floors (Table 4), while areas lower than 1 m were classified as courtyards. Seventeen field-surveyed buildings were randomly selected and utilized to examine the accuracy of the floor classification from the nDSM.
Figure 13 displays the classification results for the building floors; 16 buildings were correctly classified and the height of 1 building was overestimated. The accuracy of the floor classification from the nDSM was 0.94, and the RMSE was 0.243 (Table 5). Field verification showed that the abnormal point was a canopy built by residents. The canopy was low and easily covered by the tall arbor canopy on one side. Therefore, the canopy height was calculated as the height of the vegetation canopy.

4.3. Floor Area Estimation

The area of the homesteads was computed by multiplying the building area by the number of floors. We calculated the total construction area of the homesteads based on the reclassified results for the building heights from Section 4.2. The results showed that the total area of the homesteads was 3.1 × 10⁵ m². Specifically, the homestead area for one-floor buildings was approximately 1.14 × 10⁵ m² and accounted for 37.3% of the total homestead area; the homestead area for two-floor buildings with heights of 4–8 m was approximately 1.78 × 10⁵ m² and accounted for 58.2% of the total construction area of the homesteads; three-floor buildings with a height difference of more than 8 m had a homestead area of approximately 3.33 × 10³ m² and accounted for approximately 1.1% of the total construction area of the homesteads. The construction area for courtyards and low-rise shanty households accounted for only 3.4% of the total construction area. Figure 14 displays the frequency histogram for the building height; the average value of the pixels reached 4.45 m, and the standard deviation was 1.62. In conclusion, the average height and pixel frequency distribution indicated that residential buildings in the research area primarily have two floors; this is consistent with the field survey results.

5. Discussion

5.1. Ablation Experiments

To further verify the feasibility of the DAM (consisting of kernel attention and channel attention) and the effectiveness of the atrous convolution in the SEM, the extraction precision of the different modules and fusion strategies was evaluated in ablation experiments. The backbone model included the SIEM, the SEM (without atrous convolution), and the FFM. The benchmark models and strategies comprised the backbone, the backbone + SEM (with atrous convolution), the backbone + DAM, the backbone + AFRM, and the backbone + SEM (with atrous convolution) + DAM + AFRM. Some of the ablation results for building extraction are presented in Figure 15. EDSANet (Figure 15b) showed the best performance in building extraction, with clear boundary identification compared to the model without the DAM (Figure 15c) or the AFRM (Figure 15d).
The quantitative comparison results for the different combinations are shown in Table 6. Based on the reference network as the backbone, both the dual attention module, consisting of the kernel and channel attention modules, and the attention feature refinement module improved the representation ability of the features extracted by the network. Compared with the backbone, the accuracy was significantly improved. However, the recall of the backbone, at 0.907, was better than that of the backbone + SEM (atrous convolution), the backbone + DAM, and the backbone + AFRM individually. Table 6 shows that adding the DAM or AFRM module alone reduced the recall of the backbone model, whereas all of the OA, precision, recall, F1, and IoU results for the backbone + SEM (atrous convolution) + DAM + AFRM network were improved. This indicates that the AFRM can adjust the excessive convergence of building feature information after attention extraction with the DAM, thereby improving the accuracy of building extraction in remote sensing. The last row in Table 6 shows the results for the proposed EDSANet model, which achieved the best performance on all evaluation metrics except recall.

5.2. Summaries and Limitations

Recent years have witnessed the widespread application of deep learning in building extraction and other tasks owing to its automatic feature learning and strong adaptability. Previous studies have primarily focused on urban building extraction, with few applications in rural China. In this study, we proposed the EDSANet model to extract buildings from UAV imagery in rural Weinan, China. The overall accuracy of the building extraction achieved by EDSANet was 0.939, and the precision was 0.949. Buildings were well-identified with clear boundaries regardless of the type of roof (e.g., colored steel tile or sloped tile). Buildings in rural areas mostly have one or two floors and are generally made of adobe, brick-wood, or brick-concrete. The rural area selected in this research has fewer and more consistent building structure types than urban areas, which facilitates building extraction. However, for some irregularly arranged rural areas, the performance of building extraction with the EDSANet should be further analyzed.
Consumer-grade drones are flexible and provide high spatial resolution, which can ensure clear building boundaries and an accurate three-dimensional point cloud model. A series of 3D products based on the UAV flight data, including the DSM and DTM, was generated using oblique photogrammetry technology. The nDSM was used to remove the vacant rural plots, and the heights of the buildings were extracted from the nDSM model. However, classifying different types of ground objects with complex spectral information from high-resolution UAV images is difficult. In addition, 17 field-surveyed buildings, accounting for 8.3% of the buildings and covering all numbers of floors in this village, were randomly selected and employed to verify the classification results of the building heights in this study. As mentioned in Section 4.2, the height of the building at one sample point was overestimated because the roof was covered by the vegetation canopy. The overall accuracy of the classification results was 0.94. As the property rights and structures of the rural buildings were investigated and confirmed in the field survey, the building height error was primarily due to instability in the drone flight conditions and the overestimation of roof heights caused by trees. In future studies, we will adopt mathematical morphological methods to eliminate interference factors and further optimize the accuracy of the building boundaries identified by the deep learning methods and of the elevation extraction using UAV oblique photogrammetry.

6. Conclusions

Rapid and accurate building extraction and floor area estimation at the village level are of great significance for the overall planning of rural development and intensive land use. In this study, we proposed a comprehensive method to estimate village-level homestead areas by combining UAV remote sensing and deep learning technology. First, the building footprints were identified using the proposed EDSANet model, which merges dual attention extraction and attention feature refinement to aggregate multi-level semantics and enhance the performance of building extraction, especially for high-spatial-resolution images. Then, the number of floors of each building was estimated using the nDSM model generated from UAV oblique photogrammetry. The floor area of the entire village was estimated by multiplying the floor area of each building in the village by its number of floors. The case study was conducted in Helan village, Shaanxi province, China. The results show that the overall accuracy of the building extraction with the EDSANet model from UAV images was 0.939, with the precision reaching 0.949. The buildings in Helan village primarily have two stories and a total floor area of 3.1 × 10⁵ m². The field survey verified that the accuracy of the nDSM model was 0.94; the RMSE was 0.243. The experimental results demonstrate that the proposed workflow, combining UAV remote sensing and deep learning technology, can aid in rapid and efficient building extraction and floor area estimation at the village level in China, as well as worldwide.

Author Contributions

Conceptualization, J.Z. and Y.L.; methodology, J.Z.; software, X.C.; validation, X.Y.; formal analysis, X.Y.; investigation, X.Y.; resources, H.C.; data curation, H.C.; writing—original draft preparation, J.Z.; writing—review and editing, Y.L.; visualization, X.C. and L.G.; supervision, L.G.; project administration, Y.L. and G.N.; funding acquisition, Y.L. and G.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was jointly supported by the National Natural Science Foundation of China, grant numbers 42201077 and 42177453; the Natural Science Foundation of Shandong Province, grant number ZR2021QD074; the Shandong Top Talent Special Foundation; and the National Nonprofit Fundamental Research Grant of China, Institute of Geology, China Earthquake Administration, grant number IGCEA2106.

Data Availability Statement

The codes are available at: https://github.com/Avery1991/2022EDSANet (accessed on 4 September 2022).

Acknowledgments

We would like to thank the editors and the anonymous reviewers for their insightful comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Li, X.; Li, Z.; Yang, J.; Li, H.; Liu, Y.; Fu, B.; Yang, F. Seismic vulnerability comparison between rural Weinan and other rural areas in Western China. Int. J. Disaster Risk Reduct. 2020, 48, 101576.
2. Liu, Y.; So, E.; Li, Z.; Su, G.; Gross, L.; Li, X.; Qi, W.; Yang, F.; Fu, B.; Yalikun, A.; et al. Scenario-based seismic vulnerability and hazard analyses to help direct disaster risk reduction in rural Weinan, China. Int. J. Disaster Risk Reduct. 2020, 48, 101577.
3. Zhu, Q.; Li, Z.; Zhang, Y.; Guan, Q. Building Extraction from High Spatial Resolution Remote Sensing Images via Multiscale-Aware and Segmentation-Prior Conditional Random Fields. Remote Sens. 2020, 12, 3983.
4. Liu, S.Y.; Xiong, X.F. Property rights and regulation: Evolution and reform of China's homestead system. China Econ. Stud. 2019, 6, 17–27.
5. Liu, Y.; Fang, F.; Li, Y. Key issues of land use in China and implications for policy making. Land Use Policy 2014, 40, 6–12.
6. Yu, B.; Liu, H.; Wu, J.; Hu, Y.; Zhang, L. Automated derivation of urban building density information using airborne LiDAR data and object-based method. Landsc. Urban Plan. 2010, 98, 210–219.
7. Liu, Y.; Zheng, X.; Ai, G.; Zhang, Y.; Zuo, Y. Generating a High-Precision True Digital Orthophoto Map Based on UAV Images. ISPRS Int. J. Geo-Inf. 2018, 7, 333.
8. Allouche, M.K.; Moulin, B. Amalgamation in cartographic generalization using Kohonen's feature nets. Int. J. Geogr. Inf. Sci. 2005, 19, 899–914.
9. Dandabathula, G.; Sitiraju, S.R.; Jha, C.S. Retrieval of building heights from ICESat-2 photon data and evaluation with field measurements. Environ. Res. Infrastruct. Sustain. 2021, 1, 011003.
10. Kamath, H.G.; Singh, M.; Magruder, L.A.; Yang, Z.-L.; Niyogi, D.J. GLOBUS: GLObal Building heights for Urban Studies. arXiv 2022, arXiv:2205.12224.
11. Weidner, U.; Förstner, W. Towards automatic building extraction from high-resolution digital elevation models. ISPRS J. Photogramm. Remote Sens. 1995, 50, 38–49.
12. Sefercik, U.G.; Karakis, S.; Bayik, C.; Alkan, M.; Yastikli, N. Contribution of Normalized DSM to Automatic Building Extraction from HR Mono Optical Satellite Imagery. Eur. J. Remote Sens. 2014, 47, 575–591.
13. Ji, C.; Tang, H. Gross Floor Area Estimation from Monocular Optical Image Using the NoS R-CNN. Remote Sens. 2022, 14, 1567.
14. Toth, C.; Jozkow, G. Remote sensing platforms and sensors: A survey. ISPRS J. Photogramm. Remote Sens. 2016, 115, 22–36.
15. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97.
16. Wang, J.Z.; Lin, Z.J.; Li, C.M.; Hong, Z.G. 3D Reconstruction of Buildings with Single UAV Image. Remote Sens. Inf. 2004, 4, 11–15.
17. Ma, Y.; Wu, H.; Wang, L.; Huang, B.; Ranjan, R.; Zomaya, A.; Jie, W. Remote sensing big data computing: Challenges and opportunities. Futur. Gener. Comput. Syst. 2015, 51, 47–60.
18. Zhong, Y.; Ma, A.; Ong, Y.S.; Zhu, Z.; Zhang, L. Computational intelligence in optical remote sensing image processing. Appl. Soft Comput. 2018, 64, 75–93.
19. Meng, Y.; Peng, S. Object-Oriented Building Extraction from High-Resolution Imagery Based on Fuzzy SVM. In Proceedings of the 2009 International Conference on Information Engineering and Computer Science, Wuhan, China, 19–20 December 2009.
20. Dahiya, S.; Garg, P.K.; Jat, M.K. Object Oriented Approach for Building Extraction from High Resolution Satellite Images. In Proceedings of the 2013 3rd IEEE International Advance Computing Conference (IACC), Ghaziabad, India, 22–23 February 2013.
21. Yu, M.; Chen, X.; Zhang, W.; Liu, Y. AGs-Unet: Building Extraction Model for High Resolution Remote Sensing Images Based on Attention Gates U Network. Sensors 2022, 22, 2932.
22. Liu, Y.; Zhang, W.; Chen, X.; Yu, M.; Sun, Y.; Meng, F.; Fan, X. Landslide Detection of High-Resolution Satellite Images Using Asymmetric Dual-Channel Network. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 4091–4094.
23. Liu, Y.; Zhou, J.; Qi, W.; Li, X.; Gross, L.; Shao, Q.; Zhao, Z.; Ni, L.; Fan, X.; Li, Z. ARC-Net: An Efficient Network for Building Extraction from High-Resolution Aerial Images. IEEE Access 2020, 8, 154997–155010.
24. Boonpook, W.; Tan, Y.; Xu, B. Deep learning-based multi-feature semantic segmentation in building extraction from images of UAV photogrammetry. Int. J. Remote Sens. 2020, 42, 1–19.
25. Trevisiol, F.; Lambertini, A.; Franci, F.; Mandanici, E. An Object-Oriented Approach to the Classification of Roofing Materials Using Very High-Resolution Satellite Stereo-Pairs. Remote Sens. 2022, 14, 849.
26. Yuan, J. Learning Building Extraction in Aerial Scenes with Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 2793–2798.
27. Vakalopoulou, M.; Karantzalos, K.; Komodakis, N.; Paragios, N. Building Detection in Very High Resolution Multispectral Data with Deep Learning Features. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015.
28. Touzani, S.; Granderson, J. Open Data and Deep Semantic Segmentation for Automated Extraction of Building Footprints. Remote Sens. 2021, 13, 2578.
29. Chen, J.; Yuan, Z.; Peng, J.; Chen, L.; Huang, H.; Zhu, J.; Liu, Y.; Li, H. DASNet: Dual Attentive Fully Convolutional Siamese Networks for Change Detection in High-Resolution Satellite Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 1194–1206.
30. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
31. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
32. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
33. Romera, E.; Alvarez, J.M.; Bergasa, L.M.; Arroyo, R. ERFNet: Efficient Residual Factorized ConvNet for Real-Time Semantic Segmentation. IEEE Trans. Intell. Transp. Syst. 2017, 19, 263–272.
34. Liu, Y.; Gross, L.; Li, Z.; Li, X.; Fan, X.; Qi, W. Automatic Building Extraction on High-Resolution Remote Sensing Imagery Using Deep Convolutional Encoder-Decoder with Spatial Pyramid Pooling. IEEE Access 2019, 7, 128774–128786.
35. Konstantinidis, D.; Argyriou, V.; Stathaki, T.; Grammalidis, N. A modular CNN-based building detector for remote sensing images. Comput. Netw. 2020, 168, 107034.
36. Zhang, X. Village-Level Homestead and Building Floor Area Estimates Based on UAV Imagery and U-Net Algorithm. ISPRS Int. J. Geo-Inf. 2020, 9, 403.
37. Liao, C.; Hu, H.; Li, H.; Ge, X.; Chen, M.; Li, C.; Zhu, Q. Joint Learning of Contour and Structure for Boundary-Preserved Building Extraction. Remote Sens. 2021, 13, 1049.
38. Xiao, X.; Guo, W.; Chen, R.; Hui, Y.; Wang, J.; Zhao, H. A Swin Transformer-Based Encoding Booster Integrated in U-Shaped Network for Building Extraction. Remote Sens. 2022, 14, 2611.
39. Li, H.; Qiu, K.; Chen, L.; Mei, X.; Hong, L.; Tao, C. SCAttNet: Semantic Segmentation Network with Spatial and Channel Attention Mechanism for High-Resolution Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2021, 18, 905–909.
40. Wei, R.; Fan, B.; Wang, Y.; Zhou, A.; Zhao, Z. MBNet: Multi-Branch Network for Extraction of Rural Homesteads Based on Aerial Images. Remote Sens. 2022, 14, 2443.
41. Jing, W.; Lin, J.; Lu, H.; Chen, G.; Song, H. Learning holistic and discriminative features via an efficient external memory module for building extraction in remote sensing images. Build. Environ. 2022, 222, 109332.
42. Li, H.; Li, Y.; Zhang, G.; Liu, R.; Huang, H.; Zhu, Q.; Tao, C. Global and Local Contrastive Self-Supervised Learning for Semantic Segmentation of HR Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5618014.
43. Lin, J.; Jing, W.; Song, H.; Chen, G. ESFNet: Efficient Network for Building Extraction from High-Resolution Aerial Images. IEEE Access 2019, 7, 54285–54294.
44. Elhassan, M.A.; Huang, C.; Yang, C.; Munea, T.L. DSANet: Dilated spatial attention for real-time semantic segmentation in urban street scenes. Expert Syst. Appl. 2021, 183, 115090.
45. Li, G.; Yun, I.; Kim, J.; Kim, J. DABNet: Depth-wise asymmetric bottleneck for real-time semantic segmentation. arXiv 2019, arXiv:1907.11357.
46. Ma, N.; Zhang, X.; Zheng, H.-T.; Sun, J. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 116–131.
47. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856.
48. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
49. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the International Conference on Computer Vision, Las Condes, Chile, 11–18 December 2015; pp. 1026–1034.
50. Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual Attention Network for Scene Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3146–3154.
51. Yu, M.; Zhang, W.; Chen, X.; Liu, Y.; Niu, J. An End-to-End Atrous Spatial Pyramid Pooling and Skip-Connections Generative Adversarial Segmentation Network for Building Extraction from High-Resolution Aerial Images. Appl. Sci. 2022, 12, 5151.
52. De Boer, P.-T.; Kroese, D.P.; Mannor, S.; Rubinstein, R.Y. A Tutorial on the Cross-Entropy Method. Ann. Oper. Res. 2005, 134, 19–67.
53. Zhang, Z.; Wang, Y. JointNet: A Common Neural Network for Road and Building Extraction. Remote Sens. 2019, 11, 696.
54. Krause, S.; Sanders, T.G.M.; Mund, J.-P.; Greve, K. UAV-Based Photogrammetric Tree Height Measurement for Intensive Forest Monitoring. Remote Sens. 2019, 11, 758.
55. Kameyama, S.; Sugiura, K. Effects of Differences in Structure from Motion Software on Image Processing of Unmanned Aerial Vehicle Photography and Estimation of Crown Area and Tree Height in Forests. Remote Sens. 2021, 13, 626.
56. Karantzalos, K.; Koutsourakis, P.; Kalisperakis, I.; Grammatikopoulos, L. Model-based building detection from low-cost optical sensors onboard unmanned aerial vehicles. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-1/W4, 293–297.
57. Gevaert, C.; Persello, C.; Nex, F.; Vosselman, G. A deep learning approach to DTM extraction from imagery using rule-based training labels. ISPRS J. Photogramm. Remote Sens. 2018, 142, 106–123.
58. Özcan, A.H.; Ünsalan, C.; Reinartz, P. Ground filtering and DTM generation from DSM data using probabilistic voting and segmentation. Int. J. Remote Sens. 2018, 39, 2860–2883.
59. Serifoglu Yilmaz, C.; Gungor, O. Comparison of the performances of ground filtering algorithms and DTM generation from a UAV-based point cloud. Geocarto Int. 2018, 33, 522–537.
60. Shukla, A.; Jain, K. Automatic extraction of urban land information from unmanned aerial vehicle (UAV) data. Earth Sci. Inform. 2020, 13, 1225–1236.
61. Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848.
62. Zhu, Q.; Liao, C.; Hu, H.; Mei, X.; Li, H. MAP-Net: Multiple Attending Path Neural Network for Building Footprint Extraction from Remote Sensed Imagery. IEEE Trans. Geosci. Remote Sens. 2020, 59, 6169–6181.
Figure 1. The geographical location of the study area.
Figure 2. Flight route map in the research area.
Figure 3. Flowchart of building extraction and floor area estimation in this research.
Figure 4. The architecture of the EDSANet model consists of two parts: the semantic encoding branch and the spatial information encoding branch. (a) Spatial information encoding module, (b) semantic encoding module, (c) feature fusion module, (d) dual attention module, and (e) attention feature refinement module.
Figure 5. The architecture of the dual attention module consists of two branches: the kernel attention module and the channel attention module. (a) Kernel attention module, and (b) channel attention module.
Figure 6. An example of data augmentation by rotating and flipping the rural Weinan building dataset.
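The rotate-and-flip augmentation illustrated in Figure 6 amounts to a handful of array operations; the following is a minimal sketch, assuming each sample is an image/label pair of NumPy arrays (the function name is illustrative; the eight variants are the four 90° rotations and their horizontal flips):

```python
import numpy as np

def augment(image: np.ndarray, label: np.ndarray):
    """Yield the eight rotate/flip variants of an image/label pair."""
    for k in range(4):                # 0°, 90°, 180°, 270° rotations
        img_r = np.rot90(image, k)    # rotates the first two (spatial) axes
        lab_r = np.rot90(label, k)
        yield img_r, lab_r
        # horizontal flip of each rotated variant
        yield np.fliplr(img_r), np.fliplr(lab_r)
```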
Figure 7. Changes in accuracy and loss for the EDSANet model in the training process.
Figure 8. Flowchart of building height and floor area estimation.
Figure 9. Building extraction results of different deep learning models with the rural Weinan building dataset. (a–d) Four images randomly selected to show the test results; for each of the four groups of comparison experiments, the building extraction results of SegNet, UNet, Deeplabv3+, AGs-Unet, MAP-Net, ARC-Net, and EDSANet are shown in turn. Green represents the buildings and black represents the background; in the ground truth, red represents the buildings and black represents the background.
Figure 10. Spatial distribution of rural buildings in Helan village. (a) Ground truth of homesteads and (b) identification results based on the EDSANet model.
Figure 11. UAV-based estimation of the number of floors in rural buildings. (a) The DSM based on the photogrammetry workflow with the overlapping UAV images, (b) the DTM based on the point cloud filtering algorithm with the DSM images, and (c) the nDSM created by subtracting the DTM from the DSM.
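The nDSM step in Figure 11c is a per-pixel subtraction of the DTM from the DSM. A minimal sketch of this step, assuming both rasters have been exported as co-registered GeoTIFFs on the same grid (file names here are placeholders):

```python
import numpy as np
import rasterio

# Placeholder file names; DSM and DTM are assumed co-registered.
with rasterio.open("dsm.tif") as src:
    dsm = src.read(1).astype(np.float32)
    profile = src.profile
with rasterio.open("dtm.tif") as src:
    dtm = src.read(1).astype(np.float32)

# nDSM: per-pixel object height above the terrain.
ndsm = dsm - dtm
ndsm = np.clip(ndsm, 0, None)  # clamp negative residuals from filtering noise

profile.update(dtype="float32")
with rasterio.open("ndsm.tif", "w", **profile) as dst:
    dst.write(ndsm, 1)
```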
Figure 12. Frequency distribution diagram of the nDSM pixel values.
Figure 13. Classification results of building floors with the nDSM.
Figure 14. Frequency distribution diagram of building heights.
Figure 15. Example of extracted results from the ablation experiment with the Weinan building dataset. (a) The input images, (b) results extracted with the proposed EDSANet model, (c) EDSANet without DAM, and (d) EDSANet without AFRM.
Table 1. Detailed information on the UAV equipment.

Parameter                       | Value
Takeoff Weight                  | 1280 g
Image Size                      | 4608 × 3456
Flight Duration                 | 27 min
Focal Length                    | 15 mm
Ground Sample Distance          | 0.23 cm
Spectral Range                  | 0.38–0.76 μm
Working Temperature             | 0–40 °C
Maximum Flight Altitude         | 6000 m
Maximum Horizontal Flight Speed | 18 m/s
GPS Module                      | GPS/GLONASS dual mode
Image Coordinate System         | WGS 84/UTM Zone 49N
UAV Flight Permission           | Needed
Table 2. SEM is used to extract high-level semantic information.

Stage   | Type           | Filters
Input   | –              | –
Stage 1 | 3 × 3 Conv     | 32
Stage 2 | Down-sample    | 64
Stage 3 | Down-sample    | 128
Stage 4 | Building block | 128
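Read as a layer table, the SEM can be sketched in PyTorch as below. The internals of the "Down-sample" and "Building block" units are not specified by the table, so plain strided and padded convolutions stand in for them here; this is an assumption for illustration, not the published implementation:

```python
import torch.nn as nn

class SemanticEncodingModule(nn.Module):
    """Stage/filter layout from Table 2; block internals are assumptions."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.stage1 = nn.Sequential(              # Stage 1: 3 x 3 Conv, 32 filters
            nn.Conv2d(in_channels, 32, 3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True))
        self.stage2 = self._down(32, 64)          # Stage 2: down-sample, 64 filters
        self.stage3 = self._down(64, 128)         # Stage 3: down-sample, 128 filters
        self.stage4 = nn.Sequential(              # Stage 4: building block, 128 filters
            nn.Conv2d(128, 128, 3, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True))

    @staticmethod
    def _down(cin: int, cout: int) -> nn.Sequential:
        # Strided convolution halves the spatial resolution.
        return nn.Sequential(
            nn.Conv2d(cin, cout, 3, stride=2, padding=1),
            nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.stage4(self.stage3(self.stage2(self.stage1(x))))
```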
Table 3. Building extraction results with the rural Weinan building dataset using different CNN models.

Models     | OA    | Precision | Recall | F1    | IoU
SegNet     | 0.740 | 0.759     | 0.698  | 0.723 | 0.568
UNet       | 0.876 | 0.774     | 0.939  | 0.848 | 0.738
Deeplabv3+ | 0.899 | 0.813     | 0.946  | 0.872 | 0.777
AGs-Unet   | 0.907 | 0.864     | 0.911  | 0.887 | 0.798
MAP-Net    | 0.916 | 0.877     | 0.888  | 0.891 | 0.799
ARC-Net    | 0.929 | 0.876     | 0.921  | 0.902 | 0.822
EDSANet    | 0.939 | 0.949     | 0.887  | 0.916 | 0.848 ¹

¹ Bold items in each column indicate the highest value.
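For reference, the five scores reported in Table 3 (and later in Table 6) follow the standard definitions for binary segmentation. A minimal sketch of how they can be computed from a predicted mask and its ground truth (illustrative code, not the evaluation script used in the paper):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """OA, Precision, Recall, F1 and IoU for binary building masks (1 = building)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)     # building pixels correctly detected
    tn = np.sum(~pred & ~gt)   # background pixels correctly rejected
    fp = np.sum(pred & ~gt)    # background predicted as building
    fn = np.sum(~pred & gt)    # building pixels missed
    oa = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    return {"OA": oa, "Precision": precision, "Recall": recall,
            "F1": f1, "IoU": iou}
```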
Table 4. Classification rules for the vegetation and the number of building floors.

Parameter  | Threshold          | Class
Brightness | ≤ 60               | Vegetation
Height     | nDSM ≤ 1 m         | Courtyard
Height     | 1 m ≤ nDSM ≤ 4 m   | One floor
Height     | 4 m ≤ nDSM ≤ 8 m   | Two floors
Height     | 8 m ≤ nDSM ≤ 12 m  | Three floors
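A minimal sketch of how the rules in Table 4 could be applied per pixel, assuming the nDSM and a brightness band are already loaded as arrays. The shared interval endpoints in the table are resolved here as half-open intervals, and all names are illustrative:

```python
import numpy as np

def classify_floors(ndsm: np.ndarray, brightness: np.ndarray) -> np.ndarray:
    """Label pixels with the rules of Table 4.
    0 = vegetation, 1 = courtyard, 2/3/4 = one/two/three floors, 255 = unclassified."""
    labels = np.full(ndsm.shape, 255, dtype=np.uint8)
    labels[brightness <= 60] = 0                   # vegetation rule first
    veg = labels == 0
    labels[~veg & (ndsm <= 1)] = 1                 # courtyard
    labels[~veg & (ndsm > 1) & (ndsm <= 4)] = 2    # one floor
    labels[~veg & (ndsm > 4) & (ndsm <= 8)] = 3    # two floors
    labels[~veg & (ndsm > 8) & (ndsm <= 12)] = 4   # three floors
    return labels
```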
Table 5. Confusion matrix for the number of floors classified with the nDSM model.

Actual \ Prediction | Courtyard | One floor | Two floors | Three floors
Courtyard           | 1         | 0         | 0          | 0
One floor           | 0         | 3         | 0          | 0
Two floors          | 0         | 1         | 11         | 0
Three floors        | 0         | 0         | 0          | 1
Table 6. Building extraction accuracy for modules and variants of the model.

Models                                           | OA    | Precision | Recall | F1    | IoU
Backbone                                         | 0.911 | 0.862     | 0.907  | 0.883 | 0.783
Backbone + SEM (atrous convolution)              | 0.905 | 0.855     | 0.889  | 0.870 | 0.771
Backbone + DAM                                   | 0.906 | 0.847     | 0.899  | 0.870 | 0.773
Backbone + AFRM                                  | 0.914 | 0.878     | 0.882  | 0.879 | 0.787
Backbone + SEM (atrous convolution) + DAM + AFRM | 0.939 | 0.949     | 0.887  | 0.916 | 0.848 ¹

¹ Bold items in each column indicate the highest value.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
