Automated Fillet Weld Inspection Based on Deep Learning from 2D Images
Figure 1. Scheme of the FCAW welding process.
Figure 2. Scheme of the GMAW welding process.
Figure 3. On the left, the Fanuc 200i-D 7L robotic arm equipped with a welding torch; in the background, the gas bottles (argon/carbon dioxide) in their pre-mixed proportions. On the right, the Lincoln R450 CE multi-process welding machine, placed under the robotic arm's table and connected to it.
Figure 4. Steel plate on which numbered seams were welded and then treated according to the experiment to be carried out.
Figure 5. Equipment used to capture images in various positions and illuminations: a high-precision camera fixed to the end effector of the robotic arm, and a luminaire placed in different locations depending on the intended image, so as to obtain a series of images covering the widest feasible range of illuminations and thereby a richer training set.
Figure 6. Diagram of the methodological framework employed in this study. The process begins with the fabrication of the welds required for the experiments, followed by the acquisition of images of these welds. A series of image transformations is then applied to train three models, one per experiment, capable of detecting the manufactured weld seams.
Figure 7. Ensenso N35 industrial camera (IDS-IMAGING, Germany), used to capture images of the weld seams.
Figure 8. Mild steel plate with several weld beads, labeled with the online tool Roboflow so that the system can distinguish a correctly manufactured weld from one manufactured with a defect.
Figure 9. Set of images from the FCAW-GMAW dataset, showing the predicted label and the confidence of each prediction. Note the irregular character of the image content, with the weld bead occupying practically the entire image.
Figure 10. Training curves and performance metrics for the YOLOv8s object detection model detecting FCAW and GMAW weld seams. In all plots, the x-axis shows training epochs and the y-axis loss values, both dimensionless. The curves show a significant decrease in loss together with improving precision, recall, and mAP50 scores, indicating that training was effective.
Figure 11. Plate of fillet weld beads, some labeled GOOD and others BAD, according to what the algorithm learned during training.
Figure 12. Training curves and performance metrics for the YOLOv8s object detection model detecting weld seams manufactured without defects (labeled GOOD) and weld seams with manufacturing defects (labeled BAD). The x-axis shows training epochs and the y-axis loss values, both dimensionless. The loss decreases significantly while precision, recall, and mAP50 improve, indicating that training was effective.
Figure 13. Plate of fillet weld beads analyzed with the model obtained in experiment 3. It shows three of the four types of weld bead (objects) for which this model was trained; the image also contains other elements that the model is able to discard.
Figure 14. Training curves and performance metrics for the YOLOv8s object detection model detecting correctly made weld seams (labeled GOOD) and seams with the most common manufacturing defects (labeled UNDER for undercuts, LOP for lack of penetration, and OP for other problems). The x-axis shows training epochs and the y-axis loss values, both dimensionless. The decrease in loss is significant, although somewhat milder than in the two previous experiments; likewise, the precision, recall, and mAP50 scores, while lower than before, indicate that training was effective.
Abstract
1. Introduction
1.1. Background
1.2. Related Works
1.3. Contribution
- A set of images depicting weld seams has been developed and made available to the scientific community. The set includes images of seams deemed acceptable as well as seams exhibiting defects, such as lack of penetration or undercuts, produced with both FCAW and GMAW welding. The images were captured with a high-precision, high-quality camera rather than as X-ray images, as has traditionally been done in other works.
- The development of a methodology with a series of steps that can be used in other research dealing with the detection of objects through 2D images.
- The study and results of three experiments based on the application of neural networks through the analysis of 2D images: the detection of the type of FCAW/GMAW weld, the verification of the goodness of a weld, and the detection of certain defects in a weld seam.
1.4. Organization
2. Materials and Methods
2.1. Framework
2.2. YOLOv8 Architecture
2.3. Preparation of Fillet Weld Test Piece
- Prior to the assembly process, the component must be placed in an L-shape, with two plates of the designated metal forming a 90° angle. This configuration is to be achieved by using a web and a flange, with dimensions of these components complying with those stipulated in Section 5.1 of the electrode classification standard [37].
- The next test concerns the welding position and conditions. Here, the recommendations of the ISO 6947 standard [38] were followed with regard to electrode temperature and single-pass weld deposit. The reader may consult Section 5.2 of the standard document ISO 15792-3:2011 [36].
- In order to comply with Section 5.3 of the standard document ISO 15792-3:2011 [36], welding speed recommendations were adhered to in accordance with the consumable used.
- It was determined that welding of the second side was not a prerequisite, thus rendering point 5.4 of the standard document ISO 15792-3:2011 [36] superfluous.
- For the weld throat thickness, the robotic arm's welding parameters were taken as the reference. The throat thickness was then measured, allowing some flexibility with respect to the standard specification; the measured values fell within ±10–15% of it.
- In order to comply with the measurements and requirements established in Section 6.2 of the standard document ISO 15792-3:2011 [36], the parameters were established in the robotic arm that was used to perform the welding with maximum precision. The welding beads were then measured, as well as the throat thickness. A convex fillet weld was established.
- Section 6.2 of the standard was not given further consideration here, as it was understood to have already been covered by the preparations described in the other sections.
- Visual inspection of defects and correct welds.
- Measurement of fillet leg lengths.
- Visualization of the correct convexity of the fillet.
- Verification of the throat thickness, as previously set in the welding robot's parameters.
2.4. Methodology
2.4.1. Data Acquisition (DA)
2.4.2. Feature Extraction (FE)
2.4.3. Data Preprocessing (DP)
2.4.4. Data Augmentation (DAU)
- Horizontal flips: reflects the images horizontally, increasing the variety of subject orientations seen during training.
- Shear: adds perspective variability, helping the model remain robust to camera and subject pitch and yaw.
- Noise: adding noise improves the model's robustness to camera artifacts. A minimal sketch of these three augmentations is shown below.
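As an illustration of these three augmentations, the following is a minimal sketch using OpenCV and NumPy. The function, file name, and parameter ranges are our own illustrative choices; the actual pipeline in this work was produced with Roboflow.

```python
import cv2
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply the three augmentations described above to one image."""
    h, w = image.shape[:2]

    # Horizontal flip with 50% probability, varying subject orientation.
    if rng.random() < 0.5:
        image = cv2.flip(image, 1)

    # Shear: a small random horizontal shear via an affine warp,
    # simulating changes in camera/subject pitch and yaw.
    shear = rng.uniform(-0.15, 0.15)
    m = np.float32([[1, shear, 0], [0, 1, 0]])
    image = cv2.warpAffine(image, m, (w, h), borderMode=cv2.BORDER_REFLECT)

    # Noise: additive Gaussian noise to mimic camera artifacts.
    noise = rng.normal(0.0, 8.0, image.shape)
    return np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)

# Usage (the image path is hypothetical):
img = cv2.imread("weld_seam.png")
aug = augment(img, np.random.default_rng(0))
```

Note that in an object detection pipeline the horizontal flip and shear must also be applied to the bounding-box labels; Roboflow handles this transformation automatically.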
2.4.5. Hyper-Parameters Selection (HS)
2.4.6. Deep Learning Model (DLM)
2.4.7. Performance Metrics (PM)
- Recall (R): also called sensitivity or TPR (true positive rate), it measures the ability of the classifier to detect all positive cases, Equation (1): R = TP / (TP + FN). TP (true positive) is the number of times a positive sample is correctly classified as positive, while FN (false negative) is the number of times a positive sample is incorrectly classified as negative.
- Precision (P): measures the ability of the classifier to avoid incorrectly classifying negative samples as positive, Equation (2): P = TP / (TP + FP). Here, FP (false positive) is the number of times a negative sample is classified as positive.
- Intersection over Union (IoU): a critical metric in object detection, quantifying the degree of overlap between a ground-truth (gt) bounding box and a predicted (pd) bounding box generated by the detector, Equation (3): IoU = area(gt ∩ pd) / area(gt ∪ pd). It is used to define true positives (TP), false positives (FP), and false negatives (FN), and is therefore needed to compute the mAP metric. A minimal implementation sketch appears after this list.
- Mean Average Precision (mAP): evaluates detection performance by combining Precision and Recall across multiple object classes. mAP50 uses an IoU threshold of 0.5 and measures how well the model identifies objects with reasonable overlap; higher scores indicate better overall performance. mAP50:95 extends the evaluation over IoU thresholds from 0.5 to 0.95 and is appropriate for tasks that require precise localization and fine-grained detection. Together, mAP50 and mAP50:95 summarize detection accuracy across conditions and classes by capturing the trade-off between Precision and Recall; models with higher scores are more reliable for demanding applications such as autonomous driving and safety monitoring.
- Box loss: this loss helps the model to learn the correct position and size of the bounding boxes around the detected objects. It focuses on minimizing the error between the predicted boxes and the ground truth.
- Class loss: penalizes classification errors, ensuring that each detected object is assigned to the correct predefined category.
- Object loss: penalizes incorrect confidence that a predicted box actually contains an object, helping the model separate true objects from the background and from similar, hard-to-differentiate regions.
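To make the preceding definitions concrete, here is a minimal sketch — our own illustration, not the Ultralytics implementation — of IoU between two axis-aligned boxes and of precision and recall computed from TP/FP/FN counts:

```python
def iou(gt, pd):
    """Equation (3): intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(gt[0], pd[0]), max(gt[1], pd[1])
    ix2, iy2 = min(gt[2], pd[2]), min(gt[3], pd[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_gt = (gt[2] - gt[0]) * (gt[3] - gt[1])
    area_pd = (pd[2] - pd[0]) * (pd[3] - pd[1])
    return inter / (area_gt + area_pd - inter)

def recall(tp, fn):
    return tp / (tp + fn)   # Equation (1)

def precision(tp, fp):
    return tp / (tp + fp)   # Equation (2)

# A prediction counts as a TP when its IoU with a ground-truth box of the
# same class reaches the threshold (0.5 for mAP50).
print(round(iou((10, 10, 60, 60), (20, 20, 70, 70)), 3))  # 0.471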
3. Results and Discussion
3.1. Experimental Environment
3.2. Experiment Results and Discussion
3.2.1. Experiment 1: FCAW-GMAW Weld Seam
3.2.2. Experiment 2: Good-Bad Weld Seam
3.2.3. Experiment 3: Good-Lop-Under Weld Seam
4. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
YOLO | You Only Look Once |
CNN | Convolutional Neural Network |
RPN | Region Proposal Network |
GMAW | Gas Metal Arc Welding |
FCAW | Flux Cored Arc Welding |
ANN | Artificial Neural Network |
DA | Data Acquisition |
FE | Feature Extraction |
DP | Data Preprocessing |
DAU | Data Augmentation |
DL | Deep Learning |
NAS | Neural Architecture Search |
PM | Performance Metrics |
R | Recall |
P | Precision |
TP | True Positive |
FP | False Positive |
gt | Ground Truth |
pd | Predicted Box |
IoU | Intersection over Union |
AP | Average Precision |
mAP | Mean Average Precision |
TEP940 | Applied Robotics Research Group of the University of Cadiz |
ROI | Region Of Interest |
SSD | Single Shot Detector |
References
- Oh, S.J.; Jung, M.J.; Lim, C.; Shin, S.C. Automatic detection of welding defects using faster R-CNN. Appl. Sci. 2020, 10, 8629. [Google Scholar] [CrossRef]
- Mohamat, S.A.; Ibrahim, I.A.; Amir, A.; Ghalib, A. The Effect of Flux Core Arc Welding (FCAW) Processes on Different Parameters. Procedia Eng. 2012, 41, 1497–1501. [Google Scholar] [CrossRef]
- Ibrahim, I.A.; Mohamat, S.A.; Amir, A.; Ghalib, A. The Effect of Gas Metal Arc Welding (GMAW) Processes on Different Welding Parameters. Procedia Eng. 2012, 41, 1502–1506. [Google Scholar] [CrossRef]
- Katherasan, D.; Elias, J.V.; Sathiya, P.; Haq, A.N. Simulation and parameter optimization of flux cored arc welding using artificial neural network and particle swarm optimization algorithm. J. Intell. Manuf. 2014, 25, 67–76. [Google Scholar] [CrossRef]
- Ho, M.P.; Ngai, W.K.; Chan, T.W.; Wai, H.w. An artificial neural network approach for parametric study on welding defect classification. Int. J. Adv. Manuf. Technol. 2021, 1, 3. [Google Scholar] [CrossRef]
- Kim, I.S.; Son, J.S.; Park, C.E.; Lee, C.W.; Prasad, Y.K. A study on prediction of bead height in robotic arc welding using a neural network. J. Mater. Process. Technol. 2002, 130–131, 229–234. [Google Scholar] [CrossRef]
- Bronstein, M.M.; Bruna, J.; Cohen, T.; Veličković, P. Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges. arXiv 2021, arXiv:2104.13478. [Google Scholar]
- Liu, F.; Tao, C.; Dong, Z.; Jiang, K.; Zhou, S.; Zhang, Z.; Shen, C. Prediction of welding residual stress and deformation in electro-gas welding using artificial neural network. Mater. Today Commun. 2021, 29, 102786. [Google Scholar] [CrossRef]
- Zhang, Z.; Li, B.; Zhang, W.; Lu, R.; Wada, S.; Zhang, Y. Real-time penetration state monitoring using convolutional neural network for laser welding of tailor rolled blanks. J. Manuf. Syst. 2020, 54, 348–360. [Google Scholar] [CrossRef]
- Feng, T.; Huang, S.; Liu, J.; Wang, J.; Fang, X. Welding Surface Inspection of Armatures via CNN and Image Comparison. IEEE Sens. J. 2021, 21, 21696–21704. [Google Scholar] [CrossRef]
- Nele, L.; Mattera, G.; Vozza, M. Deep Neural Networks for Defects Detection in Gas Metal Arc Welding. Appl. Sci. 2022, 12, 3615. [Google Scholar] [CrossRef]
- Mery, D.; Riffo, V.; Zscherpel, U.; Mondragón, G.; Lillo, I.; Zuccar, I.; Lobel, H.; Carrasco, M. GDXray: The Database of X-ray Images for Nondestructive Testing. J. Nondestruct. Eval. 2015, 34, 42. [Google Scholar] [CrossRef]
- Hartung, J.; Jahn, A.; Stambke, M.; Wehner, O.; Thieringer, R.; Heizmann, M. Camera-based spatter detection in laser welding with a deep learning approach. In Forum Bildverarbeitung 2020; Längle, T., Heizmann, M., Eds.; KIT Scientific Publishing: Karlsruhe, Germany, 2020; pp. 317–328. [Google Scholar] [CrossRef]
- Nacereddine, N.; Goumeidane, A.B.; Ziou, D. Unsupervised weld defect classification in radiographic images using multivariate generalized Gaussian mixture model with exact computation of mean and shape parameters. Comput. Ind. 2019, 108, 132–149. [Google Scholar] [CrossRef]
- Deng, H.; Cheng, Y.; Feng, Y.; Xiang, J. Industrial laser welding defect detection and image defect recognition based on deep learning model developed. Symmetry 2021, 13, 1731. [Google Scholar] [CrossRef]
- Ajmi, C.; Zapata, J.; Martínez-Álvarez, J.J.; Doménech, G.; Ruiz, R. Using Deep Learning for Defect Classification on a Small Weld X-ray Image Dataset. J. Nondestruct. Eval. 2020, 39, 68. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Wang, R.; Jiao, L.; Xie, C.; Chen, P.; Du, J.; Li, R. S-RPN: Sampling-balanced region proposal network for small crop pest detection. Comput. Electron. Agric. 2021, 187, 106290. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Advances in Neural Information Processing Systems; Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., Garnett, R., Eds.; Curran Associates, Inc.: San Jose, CA, USA, 2015; Volume 28. [Google Scholar]
- Wang, Y.; Shi, F.; Tong, X. A Welding Defect Identification Approach in X-ray Images Based on Deep Convolutional Neural Networks. In Intelligent Computing Methodologies; Huang, D.S., Huang, Z.K., Hussain, A., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 53–64. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the Computer Vision—ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Los Alamitos, CA, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar] [CrossRef]
- Dai, W.; Li, D.; Tang, D.; Wang, H.; Peng, Y. Deep learning approach for defective spot welds classification using small and class-imbalanced datasets. Neurocomputing 2022, 477, 46–60. [Google Scholar] [CrossRef]
- Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 13–16 December 2015; pp. 1440–1448. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 386–397. [Google Scholar] [CrossRef]
- Yun, G.H.; Oh, S.J.; Shin, S.C. Image Preprocessing Method in Radiographic Inspection for Automatic Detection of Ship Welding Defects. Appl. Sci. 2021, 12, 123. [Google Scholar] [CrossRef]
- Hobart. Choosing the Right Shielding Gases for Arc Welding | Hobart Welders. 2024. Available online: https://www.hobartwelders.com/projects-and-advice/articles/choosing-the-right-shielding-gases-for-arc-welding (accessed on 25 November 2024).
- Lascentrum. Hyundai SC-420MC. 2024. Available online: https://lascentrum.com/en/producten/hyundai-sm-70-eco/ (accessed on 6 December 2024).
- Lascentrum. Hyundai SM-70 eco. 2024. Available online: https://lascentrum.com/en/producten/hyundai-sc-420mc/ (accessed on 6 December 2024).
- Murray_Steel. S275JR Steel Plate. 2024. Available online: https://www.murraysteelproducts.com/products/s275jr (accessed on 6 December 2024).
- Kwon, J.E.; Park, J.H.; Kim, J.H.; Lee, Y.H.; Cho, S.I. Context and scale-aware YOLO for welding defect detection. NDT E Int. Indep. Nondestruct. Test. Eval. 2023, 139, 102919. [Google Scholar] [CrossRef]
- Hussain, M. YOLO-v1 to YOLO-v8, the Rise of YOLO and Its Complementary Nature toward Digital Manufacturing and Industrial Defect Detection. Machines 2023, 11, 677. [Google Scholar] [CrossRef]
- Terven, J.; Córdova-Esparza, D.M.; Romero-González, J.A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
- ISO Standard 15792-3:2011; Welding Consumables—Test Methods. ISO: Geneva, Switzerland, 2024.
- American Welding Society. Specification for Carbon Steel Electrodes for Shielded Metal Arc Welding, 14th ed.; American Welding Society: Doral, FL, USA, 2004. [Google Scholar]
- ISO Standard 6947:2019; Welding and Allied Processes—Welding Positions. ISO: Geneva, Switzerland, 2024.
- IDS. Ensenso N Series. 2024. Available online: https://www.ids-imaging.us/ensenso-3d-camera-n-series.html (accessed on 29 October 2024).
- Shinichi, S.; Muraoka, R.; Obinata, T.; Shigeru, E.; Horita, T.; Omata, K. Steel Products for Shipbuilding; Technical Report, JFE Technical Report; JFE Holdings: Tokyo, Japan, 2004. [Google Scholar]
- Roboflow. Computer Vision Tools for Developers and Enterprises. 2024. Available online: https://roboflow.com/ (accessed on 5 October 2024).
- Puhan, S.; Mishra, S.K. Detecting Moving Objects in Dense Fog Environment using Fog-Aware-Detection Algorithm and YOLO. NeuroQuantology 2022, 20, 2864–2873. [Google Scholar]
- Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
- Lin, T.Y.; Maire, M.; Belongie, S.J.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014. [Google Scholar]
- Hermens, F. Automatic object detection for behavioural research using YOLOv8. Behav. Res. Methods 2024, 56, 7307–7330. [Google Scholar] [CrossRef] [PubMed]
- YOLO-Ultralytics. Performance Metrics—Ultralytics YOLO Docs. Available online: https://docs.ultralytics.com/es/guides/yolo-performance-metrics/ (accessed on 5 October 2024).
- TEP940. Dataset Detection FCAW-GMAW Welding. 2024. Available online: https://universe.roboflow.com/weldingpic/weld_fcaw_gmaw (accessed on 25 October 2024).
- TEP940. Dataset Detection WELD_GOOD_BAD Welding. 2024. Available online: https://universe.roboflow.com/weldingpic/weldgoodbad (accessed on 1 November 2024).
- TEP940. GOOD-OP-LOP-UNDER Dataset. 2024. Available online: https://universe.roboflow.com/weldingpic/good-op-lop-under (accessed on 30 October 2024).
| Parameter | Experiment 1 | Experiment 2 | Experiment 3 |
|---|---|---|---|
| Epochs | 105 | 300 | 300 |
| Batch size | 16 | 16 | 16 |
| Learning rate | 0.01 | 0.01 | 0.01 |
| Optimizer | SGD | SGD | SGD |
| Input image size | 320 × 320 | 320 × 320 | 320 × 320 |
| Confidence threshold | 0.75 | 0.75 | 0.75 |
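Assuming training was run through the standard Ultralytics Python interface (the exact training scripts are not reproduced in the paper), the configuration in the table corresponds to a call along these lines; the dataset descriptor and image path are hypothetical placeholders:

```python
from ultralytics import YOLO

model = YOLO("yolov8s.pt")  # the YOLOv8s detection model used in all three experiments

# Table values for Experiments 2 and 3 (Experiment 1 differs only in epochs=105).
model.train(
    data="weld_dataset.yaml",  # hypothetical dataset descriptor
    epochs=300,
    batch=16,
    lr0=0.01,          # initial learning rate
    optimizer="SGD",
    imgsz=320,         # input images resized to 320 x 320
)

# Inference with the confidence threshold from the table.
results = model.predict("weld_seam.png", conf=0.75)
```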
| Experiment | Class | Precision | Recall | mAP Val. Set | mAP Test Set |
|---|---|---|---|---|---|
| FCAW-GMAW weld seam | FCAW | 0.951 | 0.979 | 0.99 | 0.99 |
| | GMAW | 0.99 | 0.97 | | |
| GOOD-BAD weld seam | GOOD | 0.982 | 0.985 | 0.99 | 0.93 |
| | BAD | 0.99 | 0.99 | | |
| GOOD-LOP-UNDER-OP weld seam | GOOD | 0.965 | 0.92 | 0.99 | 0.99 |
| | LOP | 0.77 | 0.94 | | |
| | UNDER | 0.99 | 0.92 | | |
| | OP | 0.99 | 0.99 | | |
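Under the same assumption about the Ultralytics interface, the validation- and test-set mAP values in the table could be reproduced with calls like the following (the weights and dataset paths are illustrative):

```python
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")              # illustrative weights path
val_metrics = model.val(data="weld_dataset.yaml", split="val")
test_metrics = model.val(data="weld_dataset.yaml", split="test")
print(val_metrics.box.map50, test_metrics.box.map50)           # mAP50 on each split
```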
| Proportion | Data | Exp. 1: FCAW | Exp. 1: GMAW | Exp. 2: BAD | Exp. 2: GOOD | Exp. 3: GOOD | Exp. 3: LOP | Exp. 3: UNDER | Exp. 3: OP |
|---|---|---|---|---|---|---|---|---|---|
| 80% | Train set (90%) | 1190 | 513 | 1182 | 507 | 564 | 313 | 539 | 322 |
| | Val set (10%) | 132 | 57 | 131 | 56 | 62 | 35 | 60 | 36 |
| 20% | Test set | 331 | 142 | 329 | 141 | 156 | 87 | 150 | 90 |
| 100% | Total data | 1653 | 712 | 1642 | 704 | 782 | 435 | 749 | 448 |