Deep-Learning-Based Semantic Segmentation Approach for Point Clouds of Extra-High-Voltage Transmission Lines
Figure 1. Network structure (“local feature aggregation” indicates local feature aggregation of point clouds after down-sampling in each layer of the model).

Figure 2. The LFAPAD module (yellow dots indicate the points of the $(i-1)$th layer; red dots indicate the points $p_i$ ($p_i^m \in p_i$) of the $i$th layer; the blue dot indicates the $m$th point $a_{i,m}$ in the point cloud of the $i$th layer; $\mathbf{f}_{i,m}$ denotes the nearest-neighbor point features; $\mathbf{r}_{i,m}$ denotes the relative spatial position information between points; $\hat{\mathbf{f}}_{i,m}$ denotes the local nearest-neighbor feature; $\odot$ denotes the Hadamard product; $s_{i,m}$ denotes the attention score).

Figure 3. Schematic of feature decoding in the $l$th layer ($N_l$ denotes the number of points in the $l$th encoding layer, 3 denotes the coordinate feature dimensions $(x, y, z)$, and $C_l$ denotes the feature dimensions other than coordinates).

Figure 4. The network structure of PowerLine-Net (“RS” denotes random down-sampling of point clouds; “FPS” denotes farthest-point down-sampling; “FP” denotes feature propagation. The MLP sizes on $C_i$, $\hat{C}_i$ ($i = 1, 2, \dots, 5$) are [32, 32, 64] for $C_0$ to $C_1$; [64, 64, 128] for $C_1$ to $C_2$; [128, 128, 256] for $C_2$ to $C_3$; [256, 256, 512] for $C_3$ to $C_4$; [512, 512, 1024] for $C_4$ to $C_5$; [1024, 1024, 512] for $C_5$ to $\hat{C}_5$; [512, 512] for $\hat{C}_5$ to $\hat{C}_4$; [512, 256] for $\hat{C}_4$ to $\hat{C}_3$; [256, 256] for $\hat{C}_3$ to $\hat{C}_2$; [256, 128] for $\hat{C}_2$ to $\hat{C}_1$; and [128, 128, 128] for $\hat{C}_1$ to $\hat{C}_0$. The activation function is ReLU.)

Figure 5. Parts of the original point cloud data.

Figure 6. Point clouds of the EHVTL dataset with labeling.

Figure 7. Point cloud numbers of each category in the EHVTL dataset: (a) training dataset; (b) test dataset.

Figure 8. Comparison of three down-sampling methods.

Figure 9. Qualitative results on the EHVTL dataset for different networks: (a) ground truth; (b) PointCNN; (c) KPConv; (d) SPG; (e) PointNet++; (f) RandLA-Net; (g) PowerLine-Net.

Figure 10. Schematic of the safety distances.

Figure 11. Schematic of EHVTL cross-sectional point cloud data with risk points: (a) top view of the EHVTL section; (b) front view of the EHVTL section; (c) side view of the EHVTL section. Coordinates of the risk point $(x, y, z)$: (227.75, 57.91, 18.73); coordinates of the power line risk point $(x, y, z)$: (227.75, 54.77, 23.83). Pink points: risk points; pink triangle: location of the risk points.

Figure 12. Qualitative results on the Semantic3D (reduced-8) test dataset for PowerLine-Net.
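The data flow summarized in the Figure 2 caption (neighbor features $\mathbf{f}_{i,m}$ and relative position encodings $\mathbf{r}_{i,m}$ concatenated into local neighbor features $\hat{\mathbf{f}}_{i,m}$, then weighted by attention scores $s_{i,m}$ via a Hadamard product) follows the attentive-pooling pattern of RandLA-Net [26], on which PowerLine-Net builds. The following PyTorch fragment is a minimal sketch under our own assumptions (layer widths, a 10-dimensional relative position encoding, and all names are hypothetical, not the authors' released code):

```python
import torch
import torch.nn as nn

class LFAPAD(nn.Module):
    """Sketch of attention-weighted neighbor pooling as depicted in Figure 2.
    The 10-D relative position encoding (center xyz, neighbor xyz, offset xyz,
    distance) is an assumption borrowed from RandLA-Net-style encoders."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.pos_mlp = nn.Sequential(nn.Linear(10, d_out), nn.ReLU())    # encodes r_{i,m}
        self.feat_mlp = nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU()) # encodes f_{i,m}
        self.score_mlp = nn.Linear(2 * d_out, 2 * d_out)                 # attention scores s_{i,m}

    def forward(self, neigh_feats, rel_pos):
        # neigh_feats: (B, N, K, d_in) nearest-neighbor features f_{i,m}
        # rel_pos:     (B, N, K, 10) relative spatial information r_{i,m}
        f = self.feat_mlp(neigh_feats)
        r = self.pos_mlp(rel_pos)
        f_hat = torch.cat([f, r], dim=-1)                # local neighbor feature \hat{f}_{i,m}
        s = torch.softmax(self.score_mlp(f_hat), dim=2)  # normalize scores over the K neighbors
        return (s * f_hat).sum(dim=2)                    # Hadamard product, then sum over neighbors
```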
Abstract
1. Introduction
2. Network Architecture
2.1. Two-Step Down-Sampling
2.2. Local Feature Aggregation after Down-Sampling
2.3. Network Structure of PowerLine-Net
- (a) Assigning each point in the dataset a random initial screening value.
- (b) Taking the point with the smallest screening value in the dataset as the centroid, then adding a minimal perturbation to that centroid's screening value.
- (c) Obtaining the closest points around this centroid as a single sample using the KNN search algorithm; the number of points per sample is a fixed parameter.
- (d) Calculating the category weights of the selected sample points within the dataset to which they belong. The category weight of each point in the training dataset is set to the ratio of the number of points in that point's category to the total number of points; in the test dataset it is set to 1. The screening value of each point is then updated using Equation (6).
- (e) Repeating the above steps until the number of samples required for a single training pass is obtained. A sketch of this procedure follows the list.
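The following Python sketch is our illustrative reading of steps (a)–(e), not the authors' released code: the initial-value range, the perturbation size `eps`, and the screening-value update (a stand-in for Equation (6), which is not reproduced here) are all assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def sample_blocks(points, labels, k, n_blocks, train=True, eps=1e-6):
    """Centroid-based block sampling per steps (a)-(e).
    points: (N, 3) coordinates; labels: (N,) category ids;
    k: points per sample; n_blocks: samples per training pass."""
    n = len(points)
    # (a) random initial screening value per point (range assumed [0, 1])
    screening = np.random.rand(n)
    # (d) category weight: each point's class share of the whole dataset
    #     (set to 1 for every point at test time)
    if train:
        counts = np.bincount(labels)
        weights = counts[labels] / n
    else:
        weights = np.ones(n)

    knn = NearestNeighbors(n_neighbors=k).fit(points)
    blocks = []
    for _ in range(n_blocks):
        # (b) smallest screening value -> centroid, then perturb it
        c = np.argmin(screening)
        screening[c] += eps
        # (c) KNN search collects the k closest points as one sample
        idx = knn.kneighbors(points[c:c + 1], return_distance=False)[0]
        blocks.append(idx)
        # (d) raise screening values of sampled points so later samples
        #     drift toward less-sampled regions (stand-in for Eq. (6))
        screening[idx] += weights[idx]
    return blocks
```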
3. EHVTL Dataset Construction
3.1. Basic Information on EHVTL Point Cloud Data
3.2. Dataset Production
- (1) Pylons (shown in blue in Figure 6) are usually divided into two types according to function: linear pylons and tension-resistant pylons, of which linear pylons generally account for more than 80% of all pylons. Tension-resistant pylons are built to anchor the conductors, limiting the scope of line faults and facilitating construction and maintenance. The two pylon types are not differentiated in the dataset because of their similar appearance.
- (2) Ground wires (shown in yellow in Figure 6) protect the conductors from lightning strikes. In 500 kV EHVTLs, two ground wires are usually used; they are erected on top of the entire line, placing all the transmission lines within their protection.
- (3) Conductors (shown in red in Figure 6) are the metal wires fixed on the pylons that carry the current. For 500 kV lines, the conductors are mostly located below the ground wires and are distributed in two layers.
- (4) Vegetation (shown in green in Figure 6) mainly comprises arable land, forest, low vegetation beside roads, and so on. Because most EHVTLs are erected in field environments with high vegetation coverage, this category accounts for a relatively large proportion of the points, as shown in Figure 6.
- (5) Buildings (shown in gray in Figure 6) are mainly residential housing built close to the EHVTLs.
- (6) Ground points comprise the remaining terrain surface beneath and around the line corridor.
4. Network Experiments
4.1. EHVTL Dataset-Based Experiments
4.1.1. Network Architecture Testing
- (1) Efficiency comparison experiments of different encoding strategies.
- (2) Comparison of semantic segmentation accuracy for different encoding layer structures:
  1. Comparison of encoding strategies;
  2. Comparison of the number of encoding layers;
  3. Comparison of the down-sampling rate.
| No. | Number of Layers | Encoding Strategy | Down-Sampling Rate | OA (%) | mIoU (%) | IoU #1 | IoU #2 | IoU #3 | IoU #4 | IoU #5 | IoU #6 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Ex. 1 | 5 | TS&LFAPAD | [4,4,4,2,2] | 98.60 | 91.45 | 97.61 | 88.88 | 98.40 | 99.33 | 95.15 | 69.35 |
| Ex. 2 | 5 | TS&LFAPAD | [4,4,4,4,2] | 97.06 | 88.13 | 95.54 | 89.03 | 98.97 | 99.01 | 92.08 | 54.12 |
| Ex. 3 | 4 | TS&LFAPAD | [4,4,4,4] | 97.11 | 85.11 | 95.75 | 90.21 | 97.48 | 98.51 | 89.94 | 38.74 |
| Ex. 4 | 5 | RS&LFAPBD | [4,4,4,2,2] | 96.33 | 82.25 | 94.25 | 79.61 | 91.45 | 98.76 | 91.04 | 37.10 |
| Ex. 5 | 5 | RS&LFAPBD | [4,4,4,4,2] | 96.32 | 82.18 | 95.12 | 85.98 | 87.91 | 96.24 | 92.21 | 35.59 |
| Ex. 6 | 4 | RS&LFAPBD | [4,4,4,4] | 90.32 | 59.50 | 91.32 | 77.11 | 11.54 | 76.91 | 84.11 | 15.99 |
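For reference, OA, per-class IoU, and mIoU in the tables follow the standard confusion-matrix definitions; a small NumPy helper (our own, not from the paper) makes the relationship explicit:

```python
import numpy as np

def segmentation_metrics(conf):
    """Metrics from a confusion matrix where conf[i, j] counts points
    with ground-truth class i predicted as class j.
    Returns OA, per-class IoU (classes #1..#6), and mIoU (fractions;
    the paper reports them in percent)."""
    tp = np.diag(conf)                               # correctly labeled points per class
    oa = tp.sum() / conf.sum()                       # overall accuracy
    iou = tp / (conf.sum(0) + conf.sum(1) - tp)      # TP / (TP + FP + FN) per class
    return oa, iou, iou.mean()
```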
4.1.2. Comparison of Different Deep Neural Networks
4.1.3. Risk Point Detection Based on Semantic Segmentation Results
4.2. Experiments on the Semantic3D Dataset
5. Discussion
5.1. Experiments for the PowerLine-Net Network Based on the EHVTL Dataset
5.1.1. Comparison Analysis of Network Architecture
5.1.2. Comparison Analysis of the PowerLine-Net and Mainstream Networks
5.1.3. Application of Risk Point Detection on EHVTL Point Clouds
5.2. Experiments for the PowerLine-Net Network Based on the Semantic3D Dataset
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Fitiwi, D.Z.; De Cuadra, F.; Olmos, L.; Rivier, M. A new approach of clustering operational states for power network expansion planning problems dealing with RES (renewable energy source) generation operational variability and uncertainty. Energy 2015, 90, 1360–1376.
2. Ye, L.; Liu, Q.; Hu, Q.W. Research of Power Line Fitting and Extraction Techniques Based on Lidar Point Cloud Data. Geomat. Spat. Inf. Technol. 2010, 33, 30–34.
3. Yu, J.; Mu, C.; Feng, Y.M.; Dou, Y.J. Powerlines Extraction Techniques from Airborne LiDAR Data. Geomat. Inf. Sci. Wuhan Univ. 2011, 36, 1275–1279.
4. Zhang, W.M.; Yan, G.J.; Li, Q.Z.; Zhao, W. 3D Power Line Reconstruction by Epipolar Constraint in Helicopter Power Line Inspection System. J. Beijing Norm. Univ. 2006, 42, 629–632.
5. Li, X.; Wen, C.C.; Cao, Q.M.; Du, Y.L.; Fang, Y. A novel semi-supervised method for airborne LiDAR point cloud classification. ISPRS J. Photogramm. Remote Sens. 2021, 180, 117–129.
6. Hooper, B. Vegetation Management Takes to the Air. Transm. Distrib. World 2003, 55, 78–85.
7. Ahmad, J.; Malik, A.; Xia, L.K.; Ashikin, N. Vegetation encroachment monitoring for transmission lines right-of-ways: A survey. Electr. Power Syst. Res. 2013, 95, 339–352.
8. Xu, Z.J.; Wang, Z.Z.; Yang, F. Airborne Laser Radar Measurement Technology and the Engineering Application Practice; Wuhan University Press: Wuhan, China, 2009.
9. McLaughlin, R.A. Extracting Transmission Lines from Airborne LIDAR Data. IEEE Geosci. Remote Sens. Lett. 2006, 3, 222–226.
10. Zhou, R.Q.; Xu, Z.H.; Peng, C.G.; Zhang, F.; Jiang, W.S. A Joint Boost-based classification method of high voltage transmission corridor from airborne LiDAR point cloud. Sci. Surv. Mapp. 2019, 44, 21–27.
11. Zhang, J.Y.; Zhao, X.L.; Chen, Z.; Lu, Z.C. A review of deep learning-based semantic segmentation for point cloud. IEEE Access 2019, 7, 179118–179133.
12. Shi, C.H.; Li, J.; Gong, J.H.; Yang, B.H.; Zhang, G.Y. An improved lightweight deep neural network with knowledge distillation for local feature extraction and visual localization using images and LiDAR point clouds. ISPRS J. Photogramm. Remote Sens. 2022, 184, 177–188.
13. Graham, B.; Engelcke, M.; van der Maaten, L. 3D semantic segmentation with submanifold sparse convolutional networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 9224–9232.
14. Meng, H.Y.; Gao, L.; Lai, Y.K.; Manocha, D. VV-Net: Voxel VAE net with group convolutions for point cloud segmentation. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8499–8507.
15. Chen, X.Z.; Ma, H.M.; Wan, J.; Li, B.; Xia, T. Multi-view 3D object detection network for autonomous driving. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6526–6534.
16. Yang, B.; Luo, W.; Urtasun, R. PIXOR: Real-time 3D object detection from point clouds. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7652–7660.
17. Zhang, R.; Li, G.Y.; Li, M.L.; Wang, L. Fusion of images and point clouds for the semantic segmentation of large-scale 3D scenes based on deep learning. ISPRS J. Photogramm. Remote Sens. 2018, 143, 85–96.
18. Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Beijbom, O. PointPillars: Fast encoders for object detection from point clouds. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 12689–12697.
19. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 77–85.
20. Li, Y.Y.; Bu, R.; Sun, M.C.; Wu, W.; Di, X.H.; Chen, B.Q. PointCNN: Convolution on X-transformed points. Adv. Neural Inf. Process. Syst. 2018, 31, 828–838.
21. Thomas, H.; Qi, C.R.; Deschaud, J.E.; Marcotegui, B.; Goulette, F.; Guibas, L.J. KPConv: Flexible and deformable convolution for point clouds. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6411–6420.
22. Landrieu, L.; Simonovsky, M. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 4558–4567.
23. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Proceedings of the 31st International Conference on Neural Information Processing Systems, 4 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 5105–5114.
24. Lai, X.; Liu, J.H.; Jiang, L.; Wang, L.W.; Zhao, H.S.; Liu, S.; Qi, X.J.; Jia, J.Y. Stratified transformer for 3D point cloud segmentation. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 8500–8509.
25. Qian, G.C.; Li, Y.C.; Peng, H.W.; Mai, J.J.; Hammoud, H.; Elhoseiny, M.; Ghanem, B. PointNeXt: Revisiting PointNet++ with improved training and scaling strategies. Adv. Neural Inf. Process. Syst. 2022, 35, 23192–23204.
26. Hu, Q.Y.; Yang, B.; Xie, L.H.; Rosa, S.; Guo, Y.L.; Wang, Z.H.; Trigoni, N.; Markham, A. RandLA-Net: Efficient semantic segmentation of large-scale point clouds. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11105–11114.
27. Fan, S.Q.; Dong, Q.L.; Zhu, F.H.; Lv, Y.S.; Ye, P.J.; Wang, F.Y. SCF-Net: Learning spatial contextual features for large-scale point cloud segmentation. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 14504–14513.
28. Deng, S.; Dong, Q. GA-NET: Global attention network for point cloud semantic segmentation. IEEE Signal Process. Lett. 2021, 28, 1300–1304.
29. Chen, Z.Y.; Peng, S.W.; Zhu, H.D.; Zhang, C.T.; Xi, X.H. LiDAR point cloud classification of transmission corridor based on sample weighted-PointNet++. Remote Sens. Technol. Appl. 2021, 36, 1299–1305.
30. Sheshappanavar, S.V.; Kambhamettu, C. A novel local geometry capture in PointNet++ for 3D classification. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 262–263.
31. Chen, Y.; Liu, G.L.; Xu, Y.M.; Pan, P.; Xing, Y. PointNet++ network architecture with individual point level and global features on centroid for ALS point cloud classification. Remote Sens. 2021, 13, 472.
32. Vishwanath, K.V.; Gupta, D.; Vahdat, A.; Yocum, K. ModelNet: Towards a datacenter emulation environment. In Proceedings of the 2009 IEEE Ninth International Conference on Peer-to-Peer Computing, Seattle, WA, USA, 9–11 September 2009; pp. 81–82.
33. Dai, A.; Chang, A.X.; Savva, M.; Halber, M.; Funkhouser, T.; Nießner, M. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2432–2443.
34. Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J.D.; Schindler, K.; Pollefeys, M. Semantic3D.net: A new large-scale point cloud classification benchmark. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 41, 91–98.
35. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 3354–3361.
36. Dong, Z.; Liang, F.; Yang, B.; Xu, Y.; Stilla, U. Registration of large-scale terrestrial laser scanner point clouds: A review and benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 163, 327–342.
37. Yang, B.S.; Han, X.; Dong, Z. Point Cloud Benchmark Dataset WHU-TLS and WHU-MLS for Deep Learning. J. Remote Sens. 2021, 25, 231–240.
38. Jiang, L.X.; Cai, Z.H.; Wang, D.H.; Jiang, S.W. Survey of improving k-nearest-neighbor for classification. In Proceedings of the 2007 Fourth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Haikou, China, 24–27 August 2007; pp. 679–683.
39. Vosselman, G. Slope based filtering of laser altimetry data. Int. Arch. Photogramm. Remote Sens. 2000, 33, 935–942.
40. Zhang, K.Q.; Chen, S.C.; Whitman, D.; Shyu, M.L.; Yan, J.H.; Zhang, C.C. A progressive morphological filter for removing nonground measurements from airborne LIDAR data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 872–882.
41. Axelsson, P. DEM generation from laser scanner data using adaptive TIN models. Int. Arch. Photogramm. Remote Sens. 2000, 33, 110–117.
42. Zhang, J.X.; Lin, X.G. Filtering airborne LiDAR data by embedding smoothness-constrained segmentation in progressive TIN densification. ISPRS J. Photogramm. Remote Sens. 2013, 81, 44–59.
43. Zhang, W.M.; Qi, J.B.; Wan, P.; Wang, H.T.; Xie, D.H.; Wang, X.Y.; Yan, G.J. An easy-to-use airborne LiDAR data filtering method based on cloth simulation. Remote Sens. 2016, 8, 501.
| Encoding Strategy | Down-Sampling (ms) | Feature Extraction (ms) | Total (ms) |
|---|---|---|---|
| TS&LFAPAD | 201.43 | 519.98 | 721.41 |
| RS&LFAPBD | 2.71 | 783.59 | 784.30 |
| FPS&MLPs | 2213.36 | 2478.01 | 4691.37 |
| Network | Time | OA (%) | mIoU (%) | IoU #1 | IoU #2 | IoU #3 | IoU #4 | IoU #5 | IoU #6 |
|---|---|---|---|---|---|---|---|---|---|
| PointCNN [20] | 660 | 85.03 | 49.44 | 80.70 | 39.12 | 51.93 | 40.83 | 76.95 | 7.09 |
| KPConv [21] | 566 | 96.09 | 68.62 | 96.96 | 65.62 | 56.92 | 77.84 | 94.96 | 19.43 |
| SPG [22] | 720 | 79.62 | 49.35 | 71.04 | 58.54 | 32.17 | 70.71 | 61.80 | 1.84 |
| PointNet++ [23] | 1077 | 77.25 | 53.04 | 75.26 | 42.43 | 77.66 | 77.66 | 28.86 | 1.06 |
| RandLA-Net [26] | 593 | 96.33 | 82.25 | 94.25 | 79.61 | 91.45 | 98.76 | 91.04 | 37.10 |
| PowerLine-Net (Ours) | 510 | 98.60 | 91.45 | 97.61 | 88.88 | 98.40 | 99.33 | 95.15 | 69.35 |
| Object | Crossover | Vegetation | Building | Ground |
|---|---|---|---|---|
| Safety distance (m) | 6 | 7 | 9 | 11 |
| ID | Section No. | Span (m) | Object | Horizontal Distance (m) | Vertical Distance (m) | Clearance Distance (m) | Safety Distance (m) |
|---|---|---|---|---|---|---|---|
| 1 | 12–13 | 396.19 | Vegetation | 3.14 | 5.10 | 5.99 | 7 |
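The two tables above imply a simple risk-point check: an object point is risky when its clearance distance to the nearest conductor point falls below the safety distance of its class. The following sketch is our own minimal reading of that check, assuming per-class safety distances from the table and SciPy's `cKDTree` for nearest-conductor queries (all names are hypothetical; the paper may treat horizontal and vertical distances with separate thresholds):

```python
import numpy as np
from scipy.spatial import cKDTree

# Safety distances (m) from the table above
SAFETY = {"crossover": 6, "vegetation": 7, "building": 9, "ground": 11}

def find_risk_points(line_pts, obj_pts, obj_class):
    """Flag object points whose 3D clearance distance to the nearest
    conductor point is below the class safety distance.
    line_pts: (M, 3) conductor points; obj_pts: (P, 3) object points."""
    tree = cKDTree(line_pts)
    dist, nearest = tree.query(obj_pts)          # clearance distance per object point
    risky = dist < SAFETY[obj_class]             # e.g., 5.99 m < 7 m for the vegetation row
    # Horizontal and vertical components relative to the nearest conductor point
    horiz = np.linalg.norm(obj_pts[:, :2] - line_pts[nearest][:, :2], axis=1)
    vert = np.abs(obj_pts[:, 2] - line_pts[nearest][:, 2])
    return risky, dist, horiz, vert
```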
| Network | Time | OA (%) | mIoU (%) | Man-Made Terrain | Natural Terrain | High Vegetation | Low Vegetation | Buildings | Hard Scape | Scanning Artefacts | Cars |
|---|---|---|---|---|---|---|---|---|---|---|---|
| PointCNN [20] | 1430 | 92.2 | 71.8 | 89.1 | 82.4 | 85.5 | 51.5 | 94.1 | 38.4 | 59.3 | 68.7 |
| KPConv [21] | 600 | 92.9 | 74.6 | 90.9 | 82.2 | 84.2 | 47.9 | 94.9 | 40.0 | 77.3 | 79.7 |
| SPG [22] | 3000 | 94.0 | 73.2 | 97.4 | 92.6 | 87.9 | 44.0 | 93.2 | 31.0 | 63.5 | 76.2 |
| PointNet++ [23] | 3572 | 92.0 | 62.4 | 96.3 | 92.1 | 84.4 | 15.4 | 93.3 | 29.2 | 18.3 | 70.4 |
| RandLA-Net [26] | 670 | 94.8 | 77.4 | 95.6 | 91.4 | 86.6 | 51.5 | 95.7 | 51.5 | 69.8 | 76.8 |
| PowerLine-Net | 594 | 93.6 | 77.2 | 94.4 | 88.5 | 84.9 | 53.8 | 94.1 | 52.2 | 72.4 | 76.9 |