A Surface Defect Inspection Model via Rich Feature Extraction and Residual-Based Progressive Integration CNN
Figure 1. The defect inspection system and dataset generation process.
Figure 2. The sample images and ground truth of the five types of defects.
Figure 3. The architecture of the modelling approach via rich feature extraction and residual-based progressive integration.
Figure 4. Comparison of the proposed feature extraction strategy and the two existing structures. (a) Extraction of features using the last layer in the deep stage [24,25]. (b) Extraction of features using the last layer in all stages [36,37]. (c) Proposed: extraction of features using all of the layers in all stages.
Figure 5. The details of the generative process of the ith rich feature extraction block.
Figure 6. A detailed structural diagram of the residual-based progressive integration scheme.
Figure 7. A diagram of the predicted results and ground truth.
Figure 8. Visual comparisons of different predictions. Four types of defects (dents, scratches, spots, and stains) are shown in order.
Figure 9. Visual comparisons of different defect/object localization models.
Abstract
1. Introduction
- To improve the automation and intelligence of defect inspection, this paper proposes a CNN-based defect inspection system. The proposed model is built on the lightweight SqueezeNet backbone.
- We design three effective techniques to improve the performance of the proposed model. First, rich feature extraction blocks capture both semantic and detailed information at different scales. Second, a residual-based progressive feature fusion structure fuses the extracted features across scales. Finally, the fused defect predictions are supervised at multiple fusion steps.
- To verify the effectiveness of the proposed model, we manually labeled a pixel-level defect dataset, USB-SG, in which the locations of defects are marked. The dataset comprises five sample types (dents, scratches, spots, stains, and normal), with 683 images in total.
- Our approach achieves higher detection accuracy than other machine-learning- and deep-learning-based methods. The model runs in real time and has wide application prospects in industrial inspection tasks.
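The residual-based progressive fusion idea in the second bullet can be sketched in a few lines: a deeper, lower-resolution feature map is upsampled and added to the next shallower one as a residual, repeating stage by stage. This is a minimal NumPy sketch under stated assumptions (matched channel counts, 2x scale steps, nearest-neighbor upsampling, random feature values); the actual model applies learned convolutions on SqueezeNet features, so the shapes and operators here are illustrative only.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbor 2x upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def residual_fuse(deep, shallow):
    """One progressive-integration step: upsample the deeper feature map
    and add it to the shallower one as a residual."""
    return shallow + upsample2x(deep)

# Hypothetical features from three backbone stages, coarsest last.
f1 = np.random.rand(32, 32, 8)   # shallow stage, high resolution
f2 = np.random.rand(16, 16, 8)
f3 = np.random.rand(8, 8, 8)     # deep stage, low resolution

# Fuse progressively from the deepest stage toward the shallowest.
fused = residual_fuse(residual_fuse(f3, f2), f1)
print(fused.shape)  # → (32, 32, 8)
```

The residual form lets each fusion step refine, rather than replace, the upsampled deeper prediction, which is the motivation the paper gives for this scheme.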
2. Defect Inspection System and Proposed Dataset
2.1. Defect Inspection System
2.2. Pixel-Level USB-SG Defect Dataset
3. Proposed Method
3.1. Rich Feature Extraction
3.2. Residual-Based Progressive Integration
3.3. Multi-Step Deep Supervision
4. Experiments
4.1. Implementation Details
4.2. Evaluation Index
4.3. Ablation Study
4.3.1. Rich Feature Extraction
4.3.2. Residual-Based Progressive Integration
4.3.3. Multi-Step Deep Supervision
4.3.4. Loss Function
4.4. Comparisons with Other Models
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Zhao, Y.J.; Yan, Y.H.; Song, K.C. Vision-based automatic detection of steel surface defects in the cold rolling process: Considering the influence of industrial liquids and surface textures. Int. J. Adv. Manuf. Technol. 2017, 90, 1665–1678.
2. Yang, L.; Huang, X.; Ren, Y.; Huang, Y. Steel Plate Surface Defect Detection Based on Dataset Enhancement and Lightweight Convolution Neural Network. Machines 2022, 10, 523.
3. Neogi, N.; Mohanta, D.K.; Dutta, P.K. Review of vision-based steel surface inspection systems. EURASIP J. Image Video Process. 2014, 2014, 50.
4. Ouyang, W.; Xu, B.; Hou, J.; Yuan, X. Fabric defect detection using activation layer embedded convolutional neural network. IEEE Access 2019, 7, 70130–70140.
5. Zhou, X.; Wang, Y.; Xiao, C.; Zhu, Q.; Lu, X.; Zhang, H.; Ge, J.; Zhao, H. Automated visual inspection of glass bottle bottom with saliency detection and template matching. IEEE Trans. Instrum. Meas. 2019, 68, 4253–4267.
6. Tsai, D.M.; Huang, Y.Q.; Chiu, W.Y. Deep learning from imbalanced data for automatic defect detection in multicrystalline solar wafer images. Meas. Sci. Technol. 2021, 32, 124003.
7. Song, K.; Yan, Y. A noise robust method based on completed local binary patterns for hot-rolled steel strip surface defects. Appl. Surf. Sci. 2013, 285, 858–864.
8. Mak, K.L.; Peng, P.; Yiu, K.F.C. Fabric defect detection using morphological filters. Image Vis. Comput. 2009, 27, 1585–1592.
9. Kang, X.; Zhang, E. A universal and adaptive fabric defect detection algorithm based on sparse dictionary learning. IEEE Access 2020, 8, 221808–221830.
10. Bissi, L.; Baruffa, G.; Placidi, P.; Ricci, E.; Scorzoni, A.; Valigi, P. Automated defect detection in uniform and structured fabrics using Gabor filters and PCA. J. Vis. Commun. Image Represent. 2013, 24, 838–845.
11. Fu, G.; Sun, P.; Zhu, W.; Yang, J.; Cao, Y.; Yang, M.Y.; Cao, Y. A deep-learning-based approach for fast and robust steel surface defects classification. Opt. Lasers Eng. 2019, 121, 397–405.
12. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489.
13. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
14. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015; pp. 1–14.
15. Kim, Y.; Kim, D. A CNN-based 3D human pose estimation based on projection of depth and ridge data. Pattern Recognit. 2020, 106, 107462.
16. Tian, C.; Xu, Y.; Zuo, W.; Zhang, B.; Fei, L.; Lin, C.W. Coarse-to-fine CNN for image super-resolution. IEEE Trans. Multimed. 2020, 23, 1489–1502.
17. Gao, H.; Cheng, B.; Wang, J.; Li, K.; Zhao, J.; Li, D. Object classification using CNN-based fusion of vision and LIDAR in autonomous vehicle environment. IEEE Trans. Ind. Inform. 2018, 14, 4224–4231.
18. Li, Y.; Li, G.; Jiang, M. An End-to-End Steel Strip Surface Defects Recognition System Based on Convolutional Neural Networks. Steel Res. Int. 2016, 88, 1600068.
19. Benbarrad, T.; Eloutouate, L.; Arioua, M.; Elouaai, F.; Laanaoui, M.D. Impact of Image Compression on the Performance of Steel Surface Defect Classification with a CNN. J. Sens. Actuator Netw. 2021, 10, 73.
20. Imoto, K.; Nakai, T.; Ike, T.; Haruki, K.; Sato, Y. A CNN-based transfer learning method for defect classification in semiconductor manufacturing. In Proceedings of the 2018 International Symposium on Semiconductor Manufacturing (ISSM), Tokyo, Japan, 10–11 December 2018; pp. 1–3.
21. Ren, R.; Hung, T.; Tan, K.C. A generic deep-learning-based approach for automated surface inspection. IEEE Trans. Cybern. 2018, 48, 929–940.
22. Wang, T.; Chen, Y.; Qiao, M.; Snoussi, H. A fast and robust convolutional neural network-based defect detection model in product quality control. Int. J. Adv. Manuf. Technol. 2018, 94, 3465–3471.
23. Zhang, M.; Wu, J.; Lin, H.; Yuan, P.; Song, Y. The application of one-class classifier based on CNN in image defect detection. Procedia Comput. Sci. 2017, 114, 341–348.
24. Huang, Y.; Qiu, C.; Wang, X.; Wang, S.; Yuan, K. A compact convolutional neural network for surface defect inspection. Sensors 2020, 20, 1974.
25. Tabernik, D.; Šela, S.; Skvarč, J.; Skočaj, D. Segmentation-based deep-learning approach for surface-defect detection. J. Intell. Manuf. 2020, 31, 759–776.
26. Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259.
27. Achanta, R.; Estrada, F.; Wils, P.; Süsstrunk, S. Salient region detection and segmentation. In Proceedings of the International Conference on Computer Vision Systems, Santorini, Greece, 12–15 May 2008; pp. 66–75.
28. Zhang, J.; Sclaroff, S. Saliency detection: A boolean map approach. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 153–160.
29. Cheng, M.M.; Mitra, N.J.; Huang, X.; Torr, P.H.; Hu, S.M. Global contrast based salient region detection. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 37, 569–582.
30. Perazzi, F.; Krähenbühl, P.; Pritch, Y.; Hornung, A. Saliency filters: Contrast based filtering for salient region detection. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 733–740.
31. Huang, Y.; Qiu, C.; Yuan, K. Surface defect saliency of magnetic tile. Vis. Comput. 2020, 36, 85–96.
32. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
33. Feng, C.; Liu, M.Y.; Kao, C.C.; Lee, T.Y. Deep active learning for civil infrastructure defect detection and classification. Comput. Civ. Eng. 2017, 2017, 298–306.
34. Yang, G.; Liu, K.; Zhao, Z.; Zhang, J.; Chen, X.; Chen, B.M. Datasets and methods for boosting infrastructure inspection: A survey on defect classification. In Proceedings of the 2022 IEEE 17th International Conference on Control & Automation (ICCA), Naples, Italy, 27–30 June 2022; pp. 15–22.
35. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv 2016, arXiv:1602.07360.
36. Hou, Q.; Cheng, M.M.; Hu, X.; Borji, A.; Tu, Z.; Torr, P.H. Deeply supervised salient object detection with short connections. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3203–3212.
37. Zhang, P.; Wang, D.; Lu, H.; Wang, H.; Ruan, X. Amulet: Aggregating multi-level convolutional features for salient object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 202–211.
38. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
39. Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; pp. 675–678.
40. Sørensen, T.J. A Method of Establishing Groups of Equal Amplitude in Plant Sociology Based on Similarity of Species Content and Its Application to Analyses of the Vegetation on Danish Commons; Munksgaard: København, Denmark, 1948.
41. Li, X.; Sun, X.; Meng, Y.; Liang, J.; Wu, F.; Li, J. Dice loss for data-imbalanced NLP tasks. arXiv 2019, arXiv:1911.02855.
42. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
43. Achanta, R.; Hemami, S.; Estrada, F.; Susstrunk, S. Frequency-tuned salient region detection. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1597–1604.
44. Yang, C.; Zhang, L.; Lu, H.; Ruan, X.; Yang, M.H. Saliency detection via graph-based manifold ranking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 3166–3173.
45. Zhai, Y.; Shah, M. Visual attention detection in video sequences using spatiotemporal cues. In Proceedings of the 14th ACM International Conference on Multimedia, Santa Barbara, CA, USA, 23–27 October 2006; pp. 815–824.
46. Zhang, J.; Sclaroff, S.; Lin, Z.; Shen, X.; Price, B.; Mech, R. Minimum barrier salient object detection at 80 fps. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1404–1412.
47. Achanta, R.; Süsstrunk, S. Saliency detection using maximum symmetric surround. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 2653–2656.
48. Aiger, D.; Talbot, H. The phase only transform for unsupervised surface defect detection. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 295–302.
49. Rudinac, M.; Jonker, P.P. Saliency detection and object localization in indoor environments. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 404–407.
50. Hou, X.; Zhang, L. Saliency detection: A spectral residual approach. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8.
51. Cao, Y.; Fu, G.; Yang, J.; Cao, Y.; Yang, M.Y. Accurate salient object detection via dense recurrent connections and residual-based hierarchical feature integration. Signal Process. Image Commun. 2019, 78, 103–112.
52. He, M.; Zhao, Q.; Gao, H.; Zhang, X.; Zhao, Q. Image Segmentation of a Sewer Based on Deep Learning. Sustainability 2022, 14, 6634.
53. Nemati, S.; Ghadimi, H.; Li, X.; Butler, L.G.; Wen, H.; Guo, S. Automated Defect Analysis of Additively Fabricated Metallic Parts Using Deep Convolutional Neural Networks. J. Manuf. Mater. Process. 2022, 6, 141.
Defect Type | Total | Training Set | Testing Set
---|---|---|---
Dent | 116 | 93 | 23
Scratch | 120 | 96 | 24
Spot | 68 | 55 | 13
Stain | 56 | 45 | 11
Normal | 323 | 259 | 64
Total | 683 | 548 | 135
Model | Precision | Recall | F-measure | MAE-1 | MAE-2 | Dice
---|---|---|---|---|---|---
SQ-a | 0.5954 | 0.7140 | 0.5958 | 0.0185 | 0.0098 | 0.5587 |
SQ-b | 0.6572 | 0.6788 | 0.6340 | 0.0174 | 0.0094 | 0.6057 |
SQ-RFE | 0.6888 | 0.7061 | 0.6702 | 0.0167 | 0.0088 | 0.6515 |
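For reference, the pixel-level indices reported in these tables can be computed from a predicted defect map and a binary ground-truth mask roughly as below. This is a hedged sketch: the paper's exact protocol is not recoverable from this excerpt (in particular, how MAE-1 and MAE-2 differ and which binarization threshold is used), so the `thresh` parameter and the single MAE computation here are assumptions.

```python
import numpy as np

def defect_metrics(pred, gt, thresh=0.5):
    """Pixel-level precision, recall, MAE, and Dice for one prediction.
    `pred` is a defect probability map in [0, 1]; `gt` is a binary mask."""
    mae = np.abs(pred - gt).mean()                 # mean absolute error
    p = (pred >= thresh).astype(np.float64)        # binarized prediction
    tp = (p * gt).sum()                            # true-positive pixels
    precision = tp / max(p.sum(), 1e-8)
    recall = tp / max(gt.sum(), 1e-8)
    dice = 2 * tp / max(p.sum() + gt.sum(), 1e-8)  # Sørensen–Dice overlap
    return precision, recall, mae, dice

# Toy example: a 4-pixel defect with one false-positive pixel.
gt = np.zeros((4, 4)); gt[1:3, 1:3] = 1.0
pred = gt.copy(); pred[0, 0] = 1.0
precision, recall, mae, dice = defect_metrics(pred, gt)
print(round(precision, 3), round(recall, 3), round(dice, 3))  # → 0.8 1.0 0.889
```

Higher precision, recall, F-measure, and Dice are better; lower MAE is better, which is how the ablation rows above should be read.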
Model | Precision | Recall | F-measure | MAE-1 | MAE-2 | Dice
---|---|---|---|---|---|---
SQ-Unet | 0.6572 | 0.6788 | 0.6340 | 0.0174 | 0.0094 | 0.6057 |
SQ-RPI | 0.6988 | 0.7069 | 0.6781 | 0.0170 | 0.0090 | 0.6515 |
SQ-RFE | 0.6888 | 0.7061 | 0.6702 | 0.0167 | 0.0088 | 0.6484 |
SQ-RFE-RPI | 0.7026 | 0.7042 | 0.6862 | 0.0158 | 0.0084 | 0.6640 |
Model | Precision | Recall | F-measure | MAE-1 | MAE-2 | Dice
---|---|---|---|---|---|---
SQ-RFE-RPI | 0.7026 | 0.7042 | 0.6862 | 0.0158 | 0.0084 | 0.6640
SQ-RFE-RPI-MsDS | 0.7174 | 0.7167 | 0.6983 | 0.0162 | 0.0086 | 0.6821
SQ-RFE-RPI-MsDS | 0.7413 | 0.6995 | 0.7057 | 0.0151 | 0.0080 | 0.6854
SQ-RFE-RPI-MsDS | 0.7168 | 0.7094 | 0.6988 | 0.0160 | 0.0085 | 0.6823
SQ-RFE-RPI-MsDS | 0.7131 | 0.7105 | 0.6918 | 0.0162 | 0.0086 | 0.6734
SQ-RFE-RPI-MsDS | 0.7189 | 0.7134 | 0.6921 | 0.0159 | 0.0084 | 0.6783
SQ-RFE-RPI-MsDS | 0.7114 | 0.7116 | 0.6811 | 0.0165 | 0.0087 | 0.6745
Prediction | Precision | Recall | F-measure | MAE-1 | MAE-2 | Dice
---|---|---|---|---|---|---
 | 0.7413 | 0.6995 | 0.7057 | 0.015113 | 0.00799 | 0.6854
 | 0.7448 | 0.7039 | 0.7054 | 0.015135 | 0.00800 | 0.6801
 | 0.7442 | 0.6999 | 0.7036 | 0.015181 | 0.00803 | 0.6787
 | 0.7354 | 0.6981 | 0.7004 | 0.015306 | 0.00809 | 0.6768
 | 0.7275 | 0.6823 | 0.6924 | 0.015634 | 0.00826 | 0.6484
Loss Function | Precision | Recall | F-measure | MAE-1 | MAE-2 | Dice
---|---|---|---|---|---|---
Cross-entropy | 0.7413 | 0.6995 | 0.7057 | 0.0151 | 0.0080 | 0.6854 |
Dice loss | 0.7125 | 0.7217 | 0.6973 | 0.0160 | 0.0085 | 0.6812 |
Focal loss (, ) | 0.7478 | 0.7352 | 0.7088 | 0.0149 | 0.0078 | 0.6921 |
Focal loss (, ) | 0.7141 | 0.7308 | 0.7002 | 0.0153 | 0.0084 | 0.6765 |
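The loss functions compared in the table above can be sketched as follows. The alpha and gamma settings of the two focal-loss rows are not recoverable from this excerpt (they appear blank), so the defaults below (alpha = 0.25, gamma = 2, following Lin et al. [42]) are an assumption, and the soft-Dice form follows the usual formulation of the Dice loss [41].

```python
import numpy as np

def dice_loss(pred, gt, eps=1e-8):
    """Soft Dice loss: 1 minus the Dice overlap of prediction and truth."""
    inter = (pred * gt).sum()
    return 1.0 - 2.0 * inter / (pred.sum() + gt.sum() + eps)

def focal_loss(pred, gt, alpha=0.25, gamma=2.0, eps=1e-8):
    """Focal loss: cross-entropy with easy pixels down-weighted by gamma."""
    pred = np.clip(pred, eps, 1.0 - eps)
    pos = -alpha * (1.0 - pred) ** gamma * np.log(pred) * gt
    neg = -(1.0 - alpha) * pred ** gamma * np.log(1.0 - pred) * (1.0 - gt)
    return (pos + neg).mean()

# Toy check: predictions closer to the truth should score a lower loss.
gt = np.array([1.0, 1.0, 0.0, 0.0])
good = np.array([0.9, 0.8, 0.1, 0.2])
bad = np.array([0.4, 0.3, 0.6, 0.7])
print(round(dice_loss(good, gt), 3))              # → 0.15
print(focal_loss(good, gt) < focal_loss(bad, gt))  # → True
```

Dice loss directly optimizes region overlap, while focal loss counters the foreground/background imbalance typical of small surface defects, which matches the comparison reported in the table.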
Model | Precision | Recall | F-measure | MAE-1 | MAE-2 | Dice
---|---|---|---|---|---|---
AC [27] | 0.5328 | 0.4065 | 0.2249 | 0.0505 | 0.0268 | 0.1249 |
BMS [28] | 0.2283 | 0.6310 | 0.2181 | 0.1202 | 0.1303 | 0.3583 |
FT [43] | 0.1687 | 0.6428 | 0.1686 | 0.0702 | 0.0621 | 0.2091 |
GMR [44] | 0.1914 | 0.2783 | 0.1259 | 0.5019 | 0.5169 | 0.2060 |
HC [29] | 0.1880 | 0.6599 | 0.1821 | 0.0884 | 0.0862 | 0.3204 |
LC [45] | 0.1579 | 0.6333 | 0.1586 | 0.0539 | 0.0306 | 0.0048 |
MBP [46] | 0.1400 | 0.4490 | 0.1250 | 0.6497 | 0.7175 | 0.0979 |
MSS [47] | 0.0770 | 0.2445 | 0.0715 | 0.0640 | 0.0407 | 0.0060 |
PHOT [48] | 0.0766 | 0.4374 | 0.0767 | 0.0694 | 0.0481 | 0.0383 |
RC [29] | 0.1813 | 0.5884 | 0.1906 | 0.0611 | 0.0423 | 0.3533 |
Rudinac [49] | 0.1475 | 0.2553 | 0.1084 | 0.2458 | 0.2121 | 0.0637 |
SF [30] | 0.1924 | 0.4672 | 0.1752 | 0.2090 | 0.1967 | 0.0724 |
SR [50] | 0.1220 | 0.1898 | 0.0875 | 0.2180 | 0.2058 | 0.0720 |
U-Net [32] | 0.6263 | 0.7805 | 0.6388 | 0.0183 | 0.0098 | 0.6217 |
Rec [51] | 0.6782 | 0.8279 | 0.6883 | 0.0162 | 0.0085 | 0.6789 |
SegNet [52] | 0.6779 | 0.8277 | 0.6862 | 0.0160 | 0.0084 | 0.6797 |
LPBF-Net [53] | 0.6793 | 0.8159 | 0.6968 | 0.0157 | 0.0082 | 0.6806 |
Proposed | 0.7413 | 0.6995 | 0.7057 | 0.0151 | 0.0080 | 0.6854 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Fu, G.; Le, W.; Zhang, Z.; Li, J.; Zhu, Q.; Niu, F.; Chen, H.; Sun, F.; Shen, Y. A Surface Defect Inspection Model via Rich Feature Extraction and Residual-Based Progressive Integration CNN. Machines 2023, 11, 124. https://doi.org/10.3390/machines11010124