Open Access | Published by De Gruyter | October 17, 2023 | CC BY 4.0 license

Two-stage quality monitoring of a laser welding process using machine learning

An approach for fast yet precise quality monitoring

Patricia M. Dold, Fabian Bleier, Meiko Boley and Ralf Mikut

Abstract

In production, quality monitoring is essential to detect defective elements. State-of-the-art approaches are single-sensor systems (SSS) and multi-sensor systems (MSS). Yet, these approaches might not be suitable: Nowadays, one component may comprise several hundred meters of weld seam, necessitating high-speed welding to produce enough components. To detect as many defects as possible in time, fast yet precise monitoring is required. However, the information captured by an SSS might not be sufficient, and MSS suffer from long inference times. Therefore, we present a confidence-based cascaded system (CS). The key idea of the CS is that not all data are analyzed to obtain the weld quality, but only selected ones. As evidenced by our results, all CS outperform SSS in terms of accuracy and inference time. Further, compared to MSS, the CS has hardware advantages.


1 Introduction

Laser welding is used in the automotive [1, 2], aerospace [3], and shipbuilding [4] industries and is considered a key production technology [5–7]. Its ability to weld precisely and fast is advantageous compared to other joining techniques. During a laser welding process, material is melted, vaporized, and solidified. Unfortunately, this process is often challenging [8] and, hence, welding defects like false friends [9], humping [10], undercuts [11], or dropouts occur. Especially in the electric drive train, some components consist of several hundred laser-welded elements, and one welding defect can lead to the failure of the entire component. To quickly detect defective elements, quality monitoring is desired in industrial processes.

Because of their simple structure, photodiodes (PD) [12–15] and spectrometers [16, 17] are the two main sensors used to evaluate weld quality. Other sensors are ultraviolet sensors [18], X-ray sensors [19, 20], microscopy, microphones [19], optical coherence tomography [21], and high-speed cameras (HSC) [22–25]. Accordingly, compared to typical quality monitoring in resistance or ultrasonic welding [26, 27], data from more sensor types are available.

The signals acquired by the different sensors are analyzed with processing algorithms. Data-driven process monitoring has been implemented by applying machine learning methods like support vector machines [28–30], decision trees (DT) [31], random forest algorithms [32, 33], or Bayes classifiers [34]. Recently, deep learning has achieved success in image recognition and classification [35] and thus has been applied to weld defect inspection [25, 36, 37].

Some researchers use a single sensor type, i.e., a single-sensor system (SSS), to study mechanisms of the welding process. In [13], the optical intensity captured by a PD when welding defects occur is analyzed. In [25], the results of quality assessment of HSC or optical coherence tomography data with neural networks (NN) like Inception-v3 [38] are presented. Moreover, different convolutional neural networks (CNN) like AlexNet [35], VGG-16 [39], or MobileNetV3-Large [40] are used for the automated optical inspection of a laser welding process [36].

However, information captured by one sensor is not sufficient for holistic quality assessment [41]. A combination of multiple sensor types, on the other hand, provides a more comprehensive description of the welding process [19]. Consequently, multi-sensor systems (MSS) use different sensors [19, 29, 30, 33, 37, 42]. For example, during high-power disc laser welding, a PD, a spectrometer, and an ultraviolet sensor can be used to detect different weld defects [43].

A quality monitoring system whose inference time does not exceed the cycle time is desired because then the production process is not slowed down. Yet, because of long inference times, complex MSS and processing algorithms can result in quality monitoring approaches that are not profitable in production. In other domains, similar problems were solved by two-stage classifiers, e.g., for solder joint inspection [34, 44], inspection of surface-mounted devices [45], or ball bonding [46].

The present paper extends our conference paper [47], which presented a cascaded system (CS) with the aim of fast but still precise quality monitoring. The CS follows a two-stage structure: The first stage analyzes simple data like time series, with the advantage of a high clock rate and low memory requirements. This stage already safely classifies some welds; in areas of uncertainty, however, the next, more complex stage, which may use image data, takes over to make a final decision. The extensions include:

  1. usage of deep NN architectures like CNN or a multilayer perceptron (MLP) for PD time series in the first stage of the CS and Inception-v3 [38] or ResNet50 [48] for HSC images in the second stage,

  2. usage of classical machine learning algorithms including feature engineering and DT with the advantage of more interpretable results,

  3. comparison of 16 NN-based CS and 4 DT-based CS, so in total 20 different CS as well as their corresponding SSS and MSS, and

  4. Pareto optimization with respect to the accuracy and the inference time for quality monitoring approach design at given conditions.

2 Data set

The data were acquired in the laboratory. Photodiode (PD) signals and synchronous high-speed camera (HSC) images were captured during the welding of thin metal plates. Subsequently, the data were preprocessed so that they can be used by the quality monitoring approaches described in Section 3.

2.1 Experimental setup

Figure 1 shows the experimental setup. The laser beam was directed onto two thin metal plates by a 2D galvanometer scanner using two mirrors. An IPG YLR-1000-WC-Y14 fiber laser with an infrared wavelength of 1070 nm at a power of 250 W was used. The scanner, a Scanlab intelliWELD PR, focused the beam to a diameter of 45 µm on the workpiece surface. Two measurement systems observed the laser welding process coaxially and in-process: Firstly, a PD (FEMTO LCA-S-400K) measured light with a wavelength of 300–950 nm in the area of the weld pool; at each sampling time, the voltage of the PD amplifier was recorded. Secondly, HSC (Optronis Cyclone-1HS-3500) images were recorded.

Figure 1: Experimental setup of the laser welding process of two thin metal plates. Two measurement systems were used: a photodiode (PD) and a high-speed camera (HSC).

2.2 Experiments

As shown in Figure 1, two identical metal plates with a length of 80 mm, a width of 40 mm, and a thickness of 75 µm were welded together in an overlap joint. The plates were clamped and full-penetration welded in a rectangular geometry. After welding, the two plates should be joined along the rectangular path. The welding process for one metal plate pair took 338 ms. In total, 59 pairs were welded: 9 under reference conditions, whereas anomalies like spatters or gaps were introduced in the remaining 50.

Figure 2 shows the data and the sampling rates of the two observing sensors. The axis indicates the time in µs. Above the axis, the PD time series is plotted. The PD had a sampling rate of 250 kHz, so one sample was recorded every 4 µs. Below the time axis, the HSC images are shown. The HSC had a sampling rate of 20 kHz, resulting in one image every 50 µs. The higher sampling rate of the PD allows the detection of shorter anomalies. Moreover, faster processing is possible because of the smaller amount of raw data compared to the HSC images. However, the HSC images provide information that is not in the PD signals, such as information about the interaction zone geometry: The PD signals are spatially aggregated, whereas the HSC images have a spatial resolution. Moreover, Figure 2 shows reference and anomaly data. On the left side (green box), the process was welded under reference conditions. On the right side (red box), oil was inserted between the metal plates, leading to an anomaly. The oil resulted in a difference in the amplitude of the PD time series and in a slight difference in the interaction zone geometry in the HSC images.

Figure 2: Data and sampling rates of the PD and the HSC. Reference and anomaly data are indicated by the green and red boxes; the assignment of the PD and HSC data as chunks is given by the gray boxes.

2.3 Data preprocessing

The grayscale HSC images were cropped to a size of 100 × 100 pixels and scaled to a value range of [0, 1]. Then, each HSC image was assigned to 13 PD samples, together forming a chunk. These chunks result from the sampling rates of the sensors and are visualized in Figure 2 by the gray boxes. Next, a label, namely reference or anomaly, was assigned to each chunk. Such chunk-wise labeling makes defect localization along the welding path possible, because different locations on one metal plate pair can have different labels. Anomaly refers to those locations where an abnormality like a gap or spatters was introduced. Reference refers to the locations where no abnormality was provoked nor visible in the recorded PD signals or HSC images. Table 1 shows the number of chunks of the data set.

For the evaluation of the models, 5-fold cross-validation was applied. The training and test splits are based on whole metal plate pairs. Such a split is closer to the production scenario, as algorithms are trained on data of some workpieces and then applied to data of other ones; a random split, in contrast, would lead to a more similar distribution of the training and test data. Moreover, to have metal plate pairs of each error case in each fold, a division of the data into 5 folds is reasonable. The average number of training chunks over the 5 folds is given by n_train, the average number of test chunks by n_test, and both together as n. To teach the models invariances and robustness properties, data augmentation including rotation and flipping was applied to the HSC images. The average chunk number with data augmentation is given by n_DA.
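To make the chunking and the plate-pair-wise split concrete, the following is a minimal sketch of one possible implementation; the array names pd_signal, hsc_images, and pair_ids are hypothetical, and scikit-learn's GroupKFold is used as a stand-in for the plate-pair-based 5-fold split.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

SAMPLES_PER_CHUNK = 13  # PD samples assigned to one HSC image

def make_chunks(pd_signal, hsc_images):
    """Pair each HSC image with its 13 PD samples (one chunk each)."""
    n = min(len(pd_signal) // SAMPLES_PER_CHUNK, len(hsc_images))
    x_pd = pd_signal[: n * SAMPLES_PER_CHUNK].reshape(n, SAMPLES_PER_CHUNK)
    x_hsc = hsc_images[:n].astype(np.float32) / 255.0  # scale to [0, 1]
    return x_pd, x_hsc

def plate_pair_folds(x, y, pair_ids, n_splits=5):
    """Split on whole metal plate pairs: chunks of one pair never
    appear in both the training and the test data."""
    cv = GroupKFold(n_splits=n_splits)
    return list(cv.split(x, y, groups=pair_ids))
```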

Table 1:

Number of chunks of the data set. Given are the average numbers over the 5 folds of the cross-validation.

Label | n_train | n_test | n | n_DA
Reference | 148 425 | 35 760 | 184 185 | 1 605 716
Anomaly | 147 368 | 38 188 | 185 556 | 1 594 284
Total | 295 793 | 73 948 | 369 741 | 3 200 000

3 Quality monitoring approaches

Three different quality monitoring approaches, namely single-sensor system (SSS), multi-sensor system (MSS), and cascaded system (CS), are considered. In general, the approaches map measurement values $X_i$ to a quality-relevant quantity $Y_i$. Before the approaches are defined, the formal definitions of the data are given:

Each of the $n$ PD time series is defined as

$$X_{\text{PD},i} = \begin{pmatrix} x_{i,1} & x_{i,2} & \cdots & x_{i,13} \end{pmatrix},$$

where the first index $i \in \{1, \dots, n\}$ indicates the chunk number and the second index the sample number within that chunk. The images of the HSC are defined as

$$X_{\text{HSC},i} = \begin{pmatrix} x_{i,1,1} & x_{i,1,2} & \cdots & x_{i,1,100} \\ x_{i,2,1} & x_{i,2,2} & \cdots & x_{i,2,100} \\ \vdots & \vdots & \ddots & \vdots \\ x_{i,100,1} & x_{i,100,2} & \cdots & x_{i,100,100} \end{pmatrix}.$$

With the label $Y_i \in \{0, 1\}$, where 0 indicates anomaly and 1 reference, the data set $S$ consists of $n = 369\,741$ (see Table 1) triples according to

$$S = \{(X_{\text{PD},1}, X_{\text{HSC},1}, Y_1), \dots, (X_{\text{PD},n}, X_{\text{HSC},n}, Y_n)\}.$$

3.1 Single-sensor system

An SSS performs process monitoring based on data coming from one sensor. For the PD and HSC data as input, the SSS are defined by the functions

$$f_{\text{SSS,PD}}: \mathbb{R}^{1 \times 13} \to \{0,1\}: X_{\text{PD},i} \mapsto Y_i,$$
$$f_{\text{SSS,HSC}}: \mathbb{R}^{100 \times 100} \to \{0,1\}: X_{\text{HSC},i} \mapsto Y_i.$$

Figure 3 shows the two SSS: the one with PD signals as input (left side) and the one with HSC images as input (right side). $\hat{f}_{\text{SSS,PD}}$ and $\hat{f}_{\text{SSS,HSC}}$ denote optimized prediction models. Each prediction model consists of two blocks: a features block to extract the most important features and a classification block to determine the weld quality. Concrete algorithms for each block, namely DT and NN, are given in Section 4.

Figure 3: Two single-sensor systems (SSS). Left side: SSS with $X_{\text{PD},i}$ as input and $\hat{f}_{\text{SSS,PD}}$ as optimized prediction model. Right side: SSS with $X_{\text{HSC},i}$ as input and the optimized prediction model $\hat{f}_{\text{SSS,HSC}}$.

3.2 Multi-sensor system

An MSS uses data from multiple sensors. An MSS with PD and HSC data as input is expressed by

$$f_{\text{MSS}}: \mathbb{R}^{1 \times 13} \times \mathbb{R}^{100 \times 100} \to \{0,1\}: (X_{\text{PD},i}, X_{\text{HSC},i}) \mapsto Y_i.$$

Figure 4 shows an MSS with its prediction model $\hat{f}_{\text{MSS}}$. The MSS consists of two feature blocks, one for each sensor, and one classification block, where the features are fused and processed for prediction. The classification block has the same algorithm structure as that of the SSS,HSC; only the input dimension changes because PD features are processed as well.
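A minimal Keras sketch of such a fusion architecture is given below; the layer sizes are illustrative stand-ins, not the exact blocks of Tables 2 and 3.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Two inputs: a PD chunk (13 samples) and an HSC image (100 x 100).
pd_in = keras.Input(shape=(13, 1), name="x_pd")
hsc_in = keras.Input(shape=(100, 100, 1), name="x_hsc")

# Features PD block (small 1D convolution as a stand-in).
z_pd = layers.Conv1D(32, 3, padding="same", activation="relu")(pd_in)
z_pd = layers.Flatten()(z_pd)

# Features HSC block (small 2D convolution as a stand-in for a backbone).
z_hsc = layers.Conv2D(32, 3, padding="same", activation="relu")(hsc_in)
z_hsc = layers.MaxPooling2D()(z_hsc)
z_hsc = layers.Flatten()(z_hsc)

# Single classification block: features of both sensors are fused;
# only the input dimension of the first dense layer changes.
z = layers.Concatenate()([z_pd, z_hsc])
z = layers.Dense(64, activation="relu")(z)
out = layers.Dense(1, activation="sigmoid")(z)  # 0 = anomaly, 1 = reference

mss_model = keras.Model(inputs=[pd_in, hsc_in], outputs=out)
```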

Figure 4: Multi-sensor system (MSS). It has $X_{\text{PD},i}$ and $X_{\text{HSC},i}$ as input and $\hat{f}_{\text{MSS}}$ as prediction model.

3.3 Cascaded system

The advantage of an MSS compared with an SSS is a more holistic quality assessment; however, the inference time of an MSS is longer. The CS therefore combines the advantages of both systems. Like an MSS, it offers the possibility to use multiple sensors. In contrast to an MSS, not all data are analyzed to obtain the weld quality, but only selected ones. Figure 5 shows a two-stage CS. Let p ∈ [0, 1] be the output of a classifier that predicts based on PD signals. For p < 0.5 the classifier chooses anomaly; for p ≥ 0.5 reference. The closer p is to 0 or 1, the more confident the classifier's decision is considered. Let r ∈ (0, 0.5) be a fixed threshold. A classifier's decision is certain if p < r or p > 1 − r. If the first condition is satisfied, the classifier is certain of anomaly; if the second is satisfied, it is certain of reference. If the classifier is certain, its result is accepted; if not, a final decision is made in a next step based on the HSC data. Formally, the CS is expressed by

$$f_{\text{CS}} = \begin{cases} f_{\text{SSS,PD}}, & \text{if } p < r \text{ or } p > 1 - r, \\ f_{\text{SSS,HSC}}, & \text{otherwise}. \end{cases}$$
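The following sketch illustrates this gating logic for a single chunk; the classifier interfaces f_pd and f_hsc are hypothetical stand-ins for the trained stages.

```python
def cascaded_predict(x_pd, x_hsc, f_pd, f_hsc, r):
    """Two-stage prediction for one chunk.

    f_pd, f_hsc: trained first- and second-stage classifiers that
    return p in [0, 1]; r in (0, 0.5) is the certainty threshold.
    """
    p = f_pd(x_pd)
    if p < r or p > 1.0 - r:          # first stage is certain
        return int(p >= 0.5)          # 0 = anomaly, 1 = reference
    return int(f_hsc(x_hsc) >= 0.5)   # otherwise the HSC stage decides
```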

Figure 5: Cascaded system (CS). Depending on the certainty, which results from p and r, it uses only $X_{\text{PD},i}$, or $X_{\text{PD},i}$ and $X_{\text{HSC},i}$ as input. The prediction model is $\hat{f}_{\text{CS}}$.

4 Implementation details

The prediction models $\hat{f}_{\text{SSS,PD}}$, $\hat{f}_{\text{SSS,HSC}}$, $\hat{f}_{\text{MSS}}$, and $\hat{f}_{\text{CS}}$ introduced in Section 3 were created by combining four blocks: Features PD, Features HSC, Classification PD, and Classification HSC (see Figures 3–5). Building the approaches from the same blocks ensures a fair comparison. For each of the four blocks, different algorithms were implemented: on one hand, classical machine learning methods, namely feature engineering and decision trees (DT), and, on the other hand, neural networks (NN). The algorithm structures of the blocks are given in Tables 2 and 3. One feature block and one classification block together form a configuration. In total, two DT- and four NN-based configurations each were considered for the SSS,PD and the SSS,HSC. The MSS and CS result from combining each DT-configuration of Table 2 with each DT-configuration of Table 3, giving 2 × 2 = 4 different DT-based MSS and CS. Analogously, each NN-configuration of Table 2 was combined with each NN-configuration of Table 3, resulting in 4 × 4 = 16 different NN-based MSS and CS. Consequently, in total, 20 MSS and 20 CS were considered. The concrete algorithms of each configuration are explained in the following. Hyperparameter optimization, namely a grid search, and feature selection were conducted on the training data.

Table 2:

Configurations processing PD signals. Each configuration consists of a Features PD and a Classification PD block. The DT-based configurations are explained in Section 4.1 and the NN-based in Section 4.2.

Configuration | Features PD | Classification PD
DT1 | Manual stat. | d = 8, s = 200
DT2 | tsfresh [49] | d = 8, s = 200
CNN1 | C1(2^6)-C1(2^7)-C1(2^8)-C1(2^6) | F-D(2^6)-P-D(2^4)-P-D(2^3)-P-D(1)
CNN2 [47] | C1(2^3)-C1(2^4)-C1(2^4)-C1(2^3) | F-D(2^4)-P-D(2^3)-P-D(2^2)-P-D(1)
CNN3 | C1(2^5)-C1(2^5)-C1(2^5) | F-D(1)
MLP | – | D(5)-D(5)-D(5)-D(1)
Table 3:

Configurations processing HSC images. Each configuration consists of a Features HSC and a Classification HSC block. The DT-based configurations are explained in Section 4.1 and the NN-based in Section 4.2.

Configuration | Features HSC | Classification HSC
DT3 | Manual stat.+geo. | d = 16, s = 200
DT4 | Manual stat.+geo. | d = 32, s = 2000
MN [47] | MobileNet [50]^a | F-D(2^8)-P-D(2^8)-P-D(2^7)-P-D(1)
IV3 | Inception-v3 [38]^a | F-D(2^8)-P-D(2^8)-P-D(2^7)-P-D(1)
RN50 | ResNet50 [48]^a | F-D(2^8)-P-D(2^8)-P-D(2^7)-P-D(1)
CNN4 | C2(2^5)-C2(2^6)-C2(2^7)-C2(2^6) | F-D(2^6)-P-D(2^5)-P-D(2^4)-P-D(1)

^a Only the part before the dense layers.

4.1 Feature engineering and decision trees

In the following, the configurations DT1–DT4 from Tables 2 and 3 are explained. Features from the PD signals were extracted in two different ways: manually and automatedly with the Python toolbox tsfresh [49]. For the manual feature extraction, seven statistical (stat.) features were calculated over the 13 samples of each chunk of the PD data: mean, standard deviation, maximum, minimum, distance between maximum and minimum, kurtosis, and skewness. To avoid overlooking any important information, an automated feature extraction using tsfresh was used as well; the toolbox automatically calculates 794 time series features.
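A minimal sketch of the manual extraction, assuming chunk is a NumPy array holding the 13 PD samples of one chunk:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def pd_chunk_features(chunk):
    """Seven statistical features of one 13-sample PD chunk."""
    return np.array([
        chunk.mean(),                 # mean
        chunk.std(),                  # standard deviation
        chunk.max(),                  # maximum
        chunk.min(),                  # minimum
        chunk.max() - chunk.min(),    # distance between max and min
        kurtosis(chunk),              # kurtosis
        skew(chunk),                  # skewness
    ])

# Automated alternative with tsfresh; df_long is a hypothetical
# long-format DataFrame with one row per PD sample:
# from tsfresh import extract_features
# features = extract_features(df_long, column_id="chunk_id", column_sort="time")
```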

From each HSC image, the following statistical and geometrical (geo.) features were manually selected: Firstly, the seven statistical features calculated on the PD signal chunks were calculated for each HSC image as well, plus the median. Secondly, depending on a threshold h ∈ [0, 255], eleven geometrical features were calculated. To this end, a binary mask with the same dimensions as the HSC image was calculated for every HSC image. The mask has the value 1 at those locations where the pixel values in the HSC image are greater than or equal to h, and 0 where they are smaller. Depending on h, the binary mask carries information about, for example, the keyhole size, the keyhole shape, or spatters. Based on the binary mask, the following eleven features were extracted: area, number of regions, area of the biggest region, ratio of the area of the biggest region to the total area, convex hull, ratio of area to convex hull, circumference, ratio of circumference to area, area of a fitted ellipse, length of the ellipse, and width of the ellipse. Fifteen different empirical thresholds h were considered, resulting in 11 · 15 = 165 geometrical features per image. Together with the eight statistical features, a total of 173 features per HSC image were considered.
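A sketch of the mask-based feature calculation using scikit-image, shown for a subset of the eleven features; the function name and the use of regionprops are assumptions, not the paper's implementation.

```python
import numpy as np
from skimage.measure import label, regionprops

def geo_features(img, h):
    """Some of the eleven geometrical features for one threshold h.

    img: grayscale HSC image with values in [0, 255]; the binary mask
    is 1 where pixels are >= h and 0 elsewhere (see text above).
    """
    mask = (img >= h).astype(np.uint8)
    regs = regionprops(label(mask))           # connected regions of the mask
    area = mask.sum()
    if not regs:
        return np.zeros(6)
    big = max(regs, key=lambda rg: rg.area)   # biggest region
    return np.array([
        area,                                 # total mask area
        len(regs),                            # number of regions
        big.area,                             # area of the biggest region
        big.area / max(area, 1),              # ratio biggest region / area
        big.convex_area,                      # convex hull area
        big.perimeter,                        # circumference
    ])

# Evaluating 11 such features at 15 thresholds h yields the
# 11 * 15 = 165 geometrical features per image.
```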

After feature extraction, classification was done with DT. The DT were implemented with the library scikit-learn [51], which uses an optimized version of the CART algorithm. To measure the quality of a split, the Gini impurity was used as criterion. A grid search was performed to find good hyperparameters, namely the maximum depth of the tree d and the minimum number of chunks required to split an internal node s. The specific values of d and s can be taken from Tables 2 and 3. To obtain the most relevant features, the importance of each feature was calculated based on the Gini importance. Reducing to the most important features greatly decreases the time to build the tree. Only those features were selected to build the final DT that were among the ten most relevant features in at least one of the trees of the 5-fold cross-validation.
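A minimal scikit-learn sketch of the grid search and the importance-based selection; x_train and y_train are hypothetical feature matrices and labels, and the grid values merely mirror those in Tables 2 and 3.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# x_train: feature matrix, y_train: chunk labels (both hypothetical).
grid = GridSearchCV(
    DecisionTreeClassifier(criterion="gini"),       # Gini impurity as criterion
    param_grid={"max_depth": [8, 16, 32],           # candidate values for d
                "min_samples_split": [200, 2000]},  # candidate values for s
    cv=5,
)
grid.fit(x_train, y_train)
tree = grid.best_estimator_

# Gini importance of each feature; e.g. keep the ten most relevant ones.
top10 = tree.feature_importances_.argsort()[::-1][:10]
```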

4.2 Neural networks

In the following, all other configurations from Tables 2 and 3, namely CNN1–CNN4, MLP, MN, IV3, and RN50, are described. All consist of different NN. The blocks Features PD and Features HSC consist of convolution layers, whereas the blocks Classification PD and Classification HSC consist of dense layers. The concrete architectures are given in Tables 2 and 3.

The Features PD blocks of CNN1–CNN3 consist of C1(k) blocks. C1(k) denotes a 1D-convolution layer with a filter size of 1 × 3, followed by batch normalization and a ReLU activation function, where k ∈ ℕ indicates the number of filters. 1D-convolutions were used on the PD time series because the input is 1-dimensional. CNN4 consists of C2(k) blocks. C2(k) denotes a 2D-convolution layer with a filter size of 3 × 3, followed by batch normalization, max pooling, and a ReLU activation function. MN, IV3, and RN50 use the convolution part of the existing architectures MobileNet [50], Inception-v3 [38], and ResNet50 [48], respectively.

Regarding the classification blocks, F denotes a flatten layer. D(l) denotes a dense layer with l ∈ ℕ neurons, with a ReLU activation function if l ≠ 1 and a sigmoid activation function if l = 1. P denotes a dropout layer with a dropout rate of 0.5.
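As an illustration, a CNN3-like SSS,PD architecture could be built in Keras as follows; this is a sketch under the block definitions above (the 'same' padding is an assumption), not the exact training code of the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

def c1(x, k):
    """C1(k): 1D convolution (filter size 3, 'same' padding assumed),
    followed by batch normalization and ReLU."""
    x = layers.Conv1D(k, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

# CNN3-like model: three C1(2^5) blocks, then F and D(1).
inp = keras.Input(shape=(13, 1))                # one chunk of 13 PD samples
x = c1(inp, 2**5)
x = c1(x, 2**5)
x = c1(x, 2**5)
x = layers.Flatten()(x)                         # F
out = layers.Dense(1, activation="sigmoid")(x)  # D(1)
model = keras.Model(inp, out)
```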

During training of the NN, the binary cross-entropy was used as loss function. An Adam optimizer with a learning rate of 5 · 10⁻⁵ and the decay rates β₁ = 0.9 and β₂ = 0.99 was used. The NN were trained with a batch size of 32, 1000 steps per epoch, and 100 epochs, and with the same seed for better performance comparison. All NN were initialized with random weights. Additionally, 5-fold cross-validation was used to obtain the final results. The NN were implemented in Python using Keras [52] and TensorFlow [53]. The training processes ran on an NVIDIA A40 GPU. In total, 120 NN were trained (24 NN in each fold of the 5-fold cross-validation): 20 $\hat{f}_{\text{SSS,PD}}$ with a training time of 3–6 min each, and 20 $\hat{f}_{\text{SSS,HSC}}$ and 80 $\hat{f}_{\text{MSS}}$ with a training time of 2–4 h each.
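The training configuration described above translates, for instance, into the following Keras calls; continuing the model sketch above, the tf.data pipeline, the seed value, and the arrays x_train and y_train are assumptions.

```python
import tensorflow as tf

tf.random.set_seed(0)  # fixed seed for comparable runs (value assumed)

# x_train, y_train: hypothetical arrays of PD chunks and labels.
train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
            .shuffle(10_000).repeat().batch(32))

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5,
                                       beta_1=0.9, beta_2=0.99),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.fit(train_ds, steps_per_epoch=1000, epochs=100)
```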

4.3 Certainty of the cascaded system

Besides the four blocks Features PD, Features HSC, Classification PD, and Classification HSC, the CS includes a check of whether the PD classification result is certain or not (see Figure 5). For the 16 CS consisting of NN, the output of the NN before binarization is used; because of the binary classification, this output is a value between 0 and 1. Model calibration is not necessary, as a certainty ranking of the chunks is sufficient. However, if r and p are to be interpreted as probabilities, model calibration like Platt scaling or isotonic regression would be necessary. For the 4 CS consisting of DT-based methods, the certainty is given by the fraction of chunks in a leaf belonging to the same class. For example, if all chunks in a leaf belong to anomaly or reference, the certainty is 0 or 1, respectively. If half of the chunks in a leaf belong to reference and the other half to anomaly, the certainty value for the chunks in that leaf is 0.5.
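For the DT-based stages, scikit-learn exposes exactly this leaf fraction through predict_proba, so a sketch of the certainty check is short; tree and r are assumed from the previous steps.

```python
# p: certainty values of the first stage; tree and r as defined above.
p = tree.predict_proba(x_test)[:, 1]  # fraction of reference chunks in leaf
certain = (p < r) | (p > 1.0 - r)     # True: first stage decides itself
```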

4.4 Inference time

The quality monitoring approaches are compared with respect to accuracy and inference time. In [47], the number of parameters is used to estimate the inference time; however, only NN were considered there. The present paper deals with different types of algorithms, namely feature calculations, DT, and NN, so that a parameter comparison is not feasible. Therefore, the inference time on the same hardware, a CPU (AMD EPYC 7543 32-core processor), is used. The mean inference time per chunk over the test chunks was calculated. The mean evaluation time is used for the following reason: In production, the cycle time limits the evaluation time if the system is to remain real-time capable. If the evaluation time is limited to the cycle time for each element, the cascaded structure of the CS means that some elements have shorter and some have longer evaluation times. A shorter evaluation means that there is time when no evaluation takes place; a longer evaluation needs to be interrupted when the cycle time is reached. In contrast, if the evaluation takes place with an offset time t_o after the welding process, then the mean evaluation time per element over the time t_o must be shorter than the cycle time. The advantage is that evaluation times shorter than the cycle time for one element allow longer evaluation times for other elements. It must be mentioned that the implementation of the algorithms is not optimized yet, so the inference times should be considered a rough indication.
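A simple way to measure this quantity is to average wall-clock times over the test chunks, for example as follows; predict_fn and chunks are hypothetical.

```python
import time
import numpy as np

def mean_inference_time_ms(predict_fn, chunks):
    """Mean wall-clock inference time per chunk in milliseconds."""
    times = []
    for chunk in chunks:
        t0 = time.perf_counter()
        predict_fn(chunk)
        times.append(time.perf_counter() - t0)
    return 1000.0 * float(np.mean(times))
```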

5 Results and discussion

Figure 6 shows the results of the different approaches: the accuracy is plotted over the inference time. The accuracy is given in % on a linear scale, while the inference time is given in ms on a logarithmic scale. In the graph, yellow symbols represent SSS,PD, red symbols SSS,HSC, and blue symbols MSS. Gray lines indicate the performance of the CS depending on the threshold r, which was varied in steps of 0.005. Each black symbol lies on one of the lines and indicates the CS with the highest accuracy. While the color of a symbol characterizes the approach, the symbol's shape stands for the configuration (see Tables 2 and 3). The MSS and CS symbols (blue and black) consist of two parts: an inner and an outer symbol. The inner symbol indicates the Features PD and Classification PD blocks; the outer symbol indicates the Features HSC and Classification HSC blocks (see Figures 4 and 5). As shown in Figure 4, the MSS does not contain a Classification PD block; therefore, for MSS, the inner symbol refers only to the Features PD block. The exact values of the accuracy and the inference time are given in Table 4. Additionally, Table 4 includes precision, recall, and F1-score with standard deviations.

Figure 6: Performance of the approaches and their configurations with respect to the accuracy and the inference time. The four approaches are marked by the four colors yellow, red, blue, and black; the configurations are marked by the symbols. The configurations of SSS,PD and SSS,HSC (see Tables 2 and 3) are written next to the corresponding symbols. The symbols of the MSS and the CS result from combining the SSS-configurations. All symbols with their corresponding configurations can also be seen in Table 4. The gray lines indicate the performance of the CS depending on the threshold r, where the CS-symbols indicate the maximum value on each line.

Table 4:

Results of each configuration of each approach with 5-fold cross-validation, including accuracy, precision, recall, F1-score, and inference time. For each approach, the maximal accuracy, precision, recall, and F1-score as well as the minimal inference time are marked in bold; the best DT-configurations and the best NN-configurations of each approach are highlighted. For the accuracy, precision, recall, and F1-score, the standard deviation is also given. The entries of the CS are taken at the threshold where the accuracy of each CS-configuration has its maximum.

5.1 Single-sensor systems photodiode

Among the SSS,PD-configurations, CNN1 has the highest accuracy of 89.84 %, whereas MLP has the lowest of 85.98 %. As explained, the input of the SSS,PD consists of only 13 samples. For so little data, CNN1, an NN with 229 281 parameters, is large. Since only 13 samples are processed, it could be expected that a small NN like MLP with 136 parameters would perform similarly. However, it seems that complex architectures are beneficial even for low-dimensional input data. Among the NN-based SSS,PD-configurations, CNN1 leads to the highest inference time of 0.700 ms and MLP to the lowest of 0.456 ms.

Among the DT-based configurations, DT2, which uses automated feature extraction, performs 0.48 % better than DT1, which uses the seven manually extracted features. Because of this slight difference, both systems are competitive. However, since more features are extracted in DT2, its inference time increases by 0.542 ms.

Comparing the NN- and DT-based SSS,PD-configurations, the best DT-configuration, DT2, performs only 0.09 % worse than the best NN-configuration, CNN1. Moreover, DT1 and DT2 perform better than CNN2, CNN3, and MLP with respect to accuracy. Therefore, we conclude that DT-based methods are competitive with deep-learning-based methods. Deep NN are able to classify without process knowledge, but their classification results are hardly interpretable. In contrast, DT-based methods invest in feature engineering, where process knowledge is useful; their classification results are easier to interpret, and additional process information could be found.

5.2 Single-sensor systems high-speed camera

Comparing the SSS,HSC with the SSS,PD, all SSS,HSC lead to higher accuracies. The reason is that the SSS,HSC analyze images, which contain more information than the 13 samples used in the SSS,PD. Looking at the SSS,PD symbols (yellow) and the SSS,HSC symbols (red) in Figure 6, the trade-off between accuracy and inference time is visible: on one hand, all SSS,PD have lower inference times; on the other hand, all SSS,HSC have higher accuracies.

Among the NN-based SSS,HSC, RN50 achieves the highest accuracy of 96.50 %. In general, the accuracy of the NN-based configurations increases with the model size: from CNN4 to MN to IV3 to RN50. However, since RN50 is the largest model with 32 075 393 parameters, its inference time is also the highest at 28.491 ms. The DT-based SSS,HSC perform at the lower end regarding accuracy; however, DT3 in particular, with an accuracy of 93.76 %, is competitive with CNN4.

5.3 Multi-sensor systems

MSS mostly result in higher accuracies than the corresponding SSS,HSC. This might lead to the conclusion that some information in the PD signals is not contained in the HSC images. On closer inspection, however, the higher the accuracy of the SSS,HSC, the lower the benefit of additionally using the PD signals. As the benefit of the MSS is small for complex algorithms, we conclude that the main information of the PD signals is included in the HSC images and can be found by complex processing algorithms. However, when inference time is limited, our results show that the use of both sensors is reasonable.

5.4 Cascaded systems

Every CS is able to outperform its corresponding SSS,HSC with respect to accuracy and inference time (see the red and black symbols and the gray lines in Figure 6). When combining two SSS in the proposed cascaded way, it could, in principle, be that every chunk the SSS,PD classifies correctly is also classified correctly by the SSS,HSC; the only difference would then be that the SSS,HSC additionally classifies chunks correctly that the SSS,PD cannot. However, this is not the case: since the accuracies of the CS increase compared with the SSS,HSC, the SSS,PD sometimes performs better than the SSS,HSC, and the proposed CS can exploit this effect. Therefore, we conclude that the CS can reduce the inference time at the same or even increased accuracy compared to the corresponding SSS,HSC.

With the help of Figure 6, a suitable system can be chosen depending on the conditions of the production process. If the inference time may not exceed 1 ms, the optimal solution among all systems is the CS that combines CNN3 and CNN4, with an accuracy of 92.92 %. This optimal solution can only be seen in Figure 6: it is where the vertical line at an inference time of 1 ms crosses the highest gray line of a CS. In that case, the number of second-stage evaluations, i.e., evaluations of HSC images, is reduced. Therefore, by a suitable choice of r, the proposed CS yields the best performance for a given inference time. Analogously, if the inference time is limited to 2 ms, suitable solutions are the CS that combines CNN1 and MN, with an accuracy of 94.40 %, or the CS that combines CNN1 and CNN4, with an accuracy of 94.44 %.
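The sweep over r that produces the gray lines can be sketched as follows; the arrays for stage outputs, predictions, labels, and per-stage times are hypothetical, and the time model assumes stage 1 always runs while stage 2 runs only on uncertain chunks.

```python
import numpy as np

def sweep_r(p1, yhat1, yhat2, y, t1, t2, step=0.005):
    """Accuracy and mean inference time of the CS for each threshold r.

    p1: stage-1 outputs; yhat1, yhat2: stage-1/stage-2 predictions;
    y: true labels; t1, t2: mean per-chunk times of the two stages.
    """
    results = []
    for r in np.arange(step, 0.5, step):
        certain = (p1 < r) | (p1 > 1.0 - r)
        yhat = np.where(certain, yhat1, yhat2)   # cascade decision
        acc = (yhat == y).mean()
        t_mean = t1 + (~certain).mean() * t2     # stage 2 only if uncertain
        results.append((r, acc, t_mean))
    return results
```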

The sampling intervals of the sensors are shorter than the inference times. This is because neither the algorithms nor the hardware are optimized yet. Additionally, the input data could be preprocessed more efficiently. For example, several HSC images could be aggregated into one, and classification could then be performed on the aggregated image. On one hand, this has the advantage of shorter evaluation times; on the other hand, a larger neighborhood is considered. The latter is useful since weld defects are sometimes visible in several consecutive images.

In general, CS are worthwhile, especially for short inference times; for longer inference times, MSS outperform them. However, from a hardware perspective, a CS could run on two separate hardware units, one at each sensor, where only one bit has to be exchanged between the two units, indicating whether the SSS,PD is certain or not. In contrast, for an MSS, all data have to be transferred to a common hardware unit to be processed together.

5.5 Friedman and Nemenyi test

To compare the performance of the different SSS,PD and SSS,HSC models, a Friedman test with a significance level of 0.05 was performed. The Friedman test shows that there is a significant difference between the SSS,PD models and between the SSS,HSC models, respectively. Therefore, in a next step, a Nemenyi test with the same significance level was performed. For the SSS,PD models, there is a significant difference between the configurations MLP and CNN1. For the SSS,HSC models, there are significant differences between DT4 and IV3, DT4 and RN50, and DT3 and RN50.
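Such a test sequence can be reproduced, for example, with SciPy and the scikit-posthocs package; the latter is an assumption, as the paper does not name its implementation, and acc is a hypothetical 5 × n_models array of fold accuracies.

```python
import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp  # assumed third-party package

# acc: 5 x n_models array, acc[i, j] = accuracy of model j in fold i.
stat, p_value = friedmanchisquare(*[acc[:, j] for j in range(acc.shape[1])])
if p_value < 0.05:
    # pairwise post-hoc comparison at the same significance level
    p_matrix = sp.posthoc_nemenyi_friedman(acc)
```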

6 Summary and outlook

A confidence-based cascaded system (CS) as a quality monitoring approach for a laser welding process is presented. The key idea of the CS is that not all data are analyzed to obtain the weld quality, but only selected ones. The CS is compared with state-of-the-art quality monitoring approaches, namely single-sensor systems (SSS) and multi-sensor systems (MSS). For every approach, various algorithms, consisting either of feature engineering and decision trees (DT) or of neural networks (NN), are compared in terms of accuracy and inference time, since fast yet precise quality monitoring of welds is needed today. All CS are able to outperform the SSS in terms of accuracy and inference time. Depending on the conditions of a production process, an optimal CS can be chosen; especially for short inference times, the CS is the best choice among the approaches. Regarding the algorithms, classical machine learning using feature engineering and DT achieves results competitive with NN. Results from classical machine learning are easier to interpret but can require a time investment in feature engineering. Moreover, compared to the MSS, the presented CS has the advantage that data of different sensors do not have to be transferred to common hardware. However, when transferred to common hardware, the MSS could be executed in parallel, while the CS must be executed sequentially.

Further work can deal with a general cascaded system [47] for quality monitoring, which arbitrarily combines any sensors to obtain a quality assessment. As multi-modal systems work well under information fusion, additional non-optical sensors like acoustic sensors could be added. Moreover, further optical sensors like microscopy could be used after the welding process: while PD and HSC provide information during the welding process, e.g., about the keyhole and the molten metal, microscopy images of the solidified weld seam could be used additionally. Furthermore, to improve the CS, classifiers in lower stages could be trained only on the subset of data that is not already confidently classified by the higher stages; however, this would require training a new classifier for each value of r. Additionally, sensors could deliver conflicting data. It would therefore be interesting to analyze what proportion of the error rate of a CS is due to an incorrect and overconfident decision of the first stage, meaning the second stage would have corrected the misclassification, and what proportion is also misclassified by the second stage. Moreover, the quality monitoring approaches can be extended to distinguish not only between anomaly and reference but also between different anomalies.


Corresponding author: Patricia M. Dold, Bosch Research, Robert Bosch GmbH, Robert-Bosch-Campus 1, 71272 Renningen, Germany; and Institute for Automation and Applied Informatics (IAI), Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany, E-mail:

About the authors

Patricia M. Dold

Patricia M. Dold received the B.S. degree and the M.S. degree in electrical engineering and information technology from the Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, in 2019 and 2022, respectively. In 2020, she spent one semester at the Polytechnic University of Valencia (UPV), Valencia, Spain. Currently, her research at Bosch Research, Renningen, Germany, and the Institute for Automation and Applied Informatics (IAI), KIT, Eggenstein-Leopoldshafen, Germany, focuses on distributed multi-sensor quality monitoring of laser-based processes for intelligent production.

Fabian Bleier

Dr. Fabian Bleier is a research engineer in the department of Advanced Production Technologies at Bosch Research in Renningen. His research focuses on the integration of data analytics in manufacturing environments from edge to cloud with a special interest in real-time capable data analytics on resource-restricted devices.

Meiko Boley

Dr. Meiko Boley is a research engineer and works at Bosch Research in Renningen. He wrote his Ph.D. about using optical coherence tomography in laser welding. His current work is centered around process monitoring of laser-based processes in production.

Ralf Mikut

apl. Prof. Dr. Ralf Mikut received the Dipl.-Ing. degree in automatic control from the University of Technology, Dresden, Germany, in 1994, and the Ph.D. degree in mechanical engineering from the University of Karlsruhe, Karlsruhe, Germany, in 1999. Since 2011, he has been an Adjunct Professor at the Faculty of Mechanical Engineering and the head of the research field “Automated Image and Data Analysis”. He is leading the research group “Machine Learning for Time Series and Images” at the Institute for Automation and Applied Informatics of the Karlsruhe Institute of Technology (KIT), Germany. His current research interests include machine learning, image processing, life science applications and smart grids.

  1. Research ethics: Not applicable.

  2. Author contributions: The authors have accepted responsibility for the entire content of this manuscript and approved its submission. We describe the individual contributions of Patricia M. Dold (PMD), Fabian Bleier (FB), Meiko Boley (MB) and Ralf Mikut (RM) using CRediT [54]: Conceptualization: PMD, FB, MB, RM; Methodology: PMD; Software: PMD; Formal Analysis: PMD; Investigation: PMD, MB; Data curation: PMD; Writing – Original Draft: PMD; Writing – Review & Editing: PMD, FB, MB, RM; Supervision: FB, MB, RM; Project Administration: FB, MB, RM.

  3. Competing interests: The authors state no conflict of interest.

  4. Research funding: None declared.

  5. Data availability: Not applicable.

References

[1] G. Chen, L. Mei, M. Zhang, Y. Zhang, and Z. Wang, "Research on key influence factors of laser overlap welding of automobile body galvanized steel," Opt. Laser Technol., vol. 45, pp. 726–733, 2013. https://doi.org/10.1016/j.optlastec.2012.05.002.

[2] K. M. Hong and Y. C. Shin, "Prospects of laser welding technology in the automotive industry: a review," J. Mater. Process. Technol., vol. 245, pp. 46–69, 2017. https://doi.org/10.1016/j.jmatprotec.2017.02.008.

[3] K. Abderrazak, W. Ben Salem, H. Mhiri, P. Bournot, and M. Autric, "Nd:YAG laser welding of AZ91 magnesium alloy for aerospace industries," Metall. Mater. Trans. B, vol. 40, pp. 54–61, 2009. https://doi.org/10.1007/s11663-008-9218-7.

[4] K. Haug and G. Pritschow, "Robust laser-stripe sensor for automated weld-seam-tracking in the shipbuilding industry," in IECON'98. Proceedings of the 24th Annual Conference of the IEEE Industrial Electronics Society, vol. 2, 1998, pp. 1236–1241.

[5] M. M. Atabaki, N. Yazdian, J. Ma, and R. Kovacevic, "High power laser welding of thick steel plates in a horizontal butt joint configuration," Opt. Laser Technol., vol. 83, pp. 1–12, 2016. https://doi.org/10.1016/j.optlastec.2016.03.016.

[6] S. Pang, X. Chen, J. Zhou, X. Shao, and C. Wang, "3D transient multiphase model for keyhole, vapor plume, and weld pool dynamics in laser welding including the ambient pressure effect," Opt. Lasers Eng., vol. 74, pp. 47–58, 2015. https://doi.org/10.1016/j.optlaseng.2015.05.003.

[7] Y. Zhang, F. Li, Z. Liang, Y. Ying, Q. Lin, and H. Wei, "Correlation analysis of penetration based on keyhole and plasma plume in laser welding," J. Mater. Process. Technol., vol. 256, pp. 1–12, 2018. https://doi.org/10.1016/j.jmatprotec.2018.01.032.

[8] C. Alippi, P. Braione, V. Piuri, and F. Scotti, "A methodological approach to multisensor classification for innovative laser material processing units," in Proceedings of the 18th IEEE Instrumentation and Measurement Technology Conference (I2MTC), vol. 3, 2001, pp. 1762–1767.

[9] K. Heller, S. Kessler, F. Dorsch, P. Berger, and T. Graf, "Robust 'false friend' detection via thermographic imaging," in Lasers in Manufacturing Conference 2015, 2015.

[10] J. Powell, T. Ilar, J. Frostevarg, et al., "Weld root instabilities in fiber laser welding," J. Laser Appl., vol. 27, no. S2, p. S29008, 2015. https://doi.org/10.2351/1.4906390.

[11] J. Frostevarg and A. F. H. Kaplan, "Undercuts in laser arc hybrid welding," Phys. Procedia, vol. 56, pp. 663–672, 2014. https://doi.org/10.1016/j.phpro.2014.08.071.

[12] A. G. Paleocrassas and J. F. Tu, "Inherent instability investigation for low speed laser welding of aluminum using a single-mode fiber laser," J. Mater. Process. Technol., vol. 210, no. 10, pp. 1411–1418, 2010. https://doi.org/10.1016/j.jmatprotec.2010.04.002.

[13] A. Molino, M. Martina, F. Vacca, et al., "FPGA implementation of time–frequency analysis algorithms for laser welding monitoring," Microprocess. Microsyst., vol. 33, no. 3, pp. 179–190, 2009. https://doi.org/10.1016/j.micpro.2008.11.001.

[14] S. S. Rodil, R. A. Gómez, J. M. Bernández, F. Rodríguez, L. J. Miguel, and J. R. Perán, "Laser welding defects detection in automotive industry based on radiation and spectroscopical measurements," Int. J. Adv. Manuf. Technol., vol. 49, pp. 133–145, 2010. https://doi.org/10.1007/s00170-009-2395-y.

[15] G. Chianese, P. Franciosa, J. Nolte, D. Ceglarek, and S. Patalano, "Characterization of photodiodes for detection of variations in part-to-part gap and weld penetration depth during remote laser welding of copper-to-steel battery tab connectors," J. Manuf. Sci. Eng., vol. 144, no. 7, p. 071004, 2022. https://doi.org/10.1115/1.4052725.

[16] F. Kong, J. Ma, B. Carlson, and R. Kovacevic, "Real-time monitoring of laser welding of galvanized high strength steel in lap joint configuration," Opt. Laser Technol., vol. 44, pp. 2186–2196, 2012. https://doi.org/10.1016/j.optlastec.2012.03.003.

[17] P. B. García-Allende, J. Mirapeix, O. M. Conde, A. Cobo, and J. M. López-Higuera, "Spectral processing technique based on feature selection and artificial neural networks for arc-welding quality monitoring," NDT&E Int., vol. 42, no. 1, pp. 56–63, 2009. https://doi.org/10.1016/j.ndteint.2008.07.004.

[18] M. Thornton, L. Han, and M. Shergold, "Progress in NDT of resistance spot welding of aluminium using ultrasonic C-scan," NDT&E Int., vol. 48, pp. 30–38, 2012. https://doi.org/10.1016/j.ndteint.2012.02.005.

[19] K. Wasmer, T. Le-Quang, B. Meylan, et al., "Laser processing quality monitoring by combining acoustic emission and machine learning: a high-speed X-ray imaging approach," Procedia CIRP, vol. 74, pp. 654–658, 2018. https://doi.org/10.1016/j.procir.2018.08.054.

[20] S. Shevchik, T. Le-Quang, B. Meylan, et al., "Supervised deep learning for real-time quality monitoring of laser welding with X-ray radiographic guidance," Sci. Rep., vol. 10, no. 1, p. 3389, 2020. https://doi.org/10.1038/s41598-020-60294-x.

[21] M. Baader, A. Mayr, T. Raffin, J. Selzam, A. Kühl, and J. Franke, "Potentials of optical coherence tomography for process monitoring in laser welding of hairpin windings," in 11th International Electric Drives Production Conference (EDPC), 2021, pp. 1–10. https://doi.org/10.1109/EDPC53547.2021.9684210.

[22] S. Tsukamoto, "High speed imaging technique Part 2 – high speed imaging of power beam welding phenomena," Sci. Technol. Weld. Joining, vol. 16, no. 1, pp. 44–55, 2011. https://doi.org/10.1179/136217110x12785889549949.

[23] M. Jäger, S. Humbert, and F. A. Hamprecht, "Sputter tracking for the automatic monitoring of industrial laser-welding processes," IEEE Trans. Ind. Electron., vol. 55, pp. 2177–2184, 2008. https://doi.org/10.1109/tie.2008.918637.

[24] M. Jäger and F. A. Hamprecht, "Principal component imagery for the quality monitoring of dynamic laser welding processes," IEEE Trans. Ind. Electron., vol. 56, pp. 1307–1313, 2009. https://doi.org/10.1109/tie.2008.2008339.

[25] J. Vater, M. Pollach, C. Lenz, D. Winkle, and A. Knoll, "Quality control and fault classification of laser welded hairpins in electrical motors," in 28th European Signal Processing Conference (EUSIPCO), 2020, pp. 1377–1381.

[26] B. Zhou, T. Pychynski, M. Reischl, E. Kharlamov, and R. Mikut, "Machine learning with domain knowledge for predictive quality monitoring in resistance spot welding," J. Intell. Manuf., vol. 33, no. 4, pp. 1139–1163, 2022. https://doi.org/10.1007/s10845-021-01892-y.

[27] E. B. Schwarz, F. Bleier, F. Guenter, R. Mikut, and J. P. Bergmann, "Improving process monitoring of ultrasonic metal welding using classical machine learning methods and process-informed time series evaluation," J. Manuf. Process., vol. 77, pp. 54–62, 2022. https://doi.org/10.1016/j.jmapro.2022.02.057.

[28] T. S. Yun, K. J. Sim, and H. J. Kim, "Support vector machine-based inspection of solder joints using circular illumination," Electron. Lett., vol. 36, no. 11, p. 1, 2000. https://doi.org/10.1049/el:20000342.

[29] D. You, X. Gao, and S. Katayama, "Multisensor fusion system for monitoring high-power disk laser welding using support vector machine," IEEE Trans. Ind. Inform., vol. 10, no. 2, pp. 1285–1295, 2014. https://doi.org/10.1109/tii.2014.2309482.

[30] Z. Zhang and S. Chen, "Real-time seam penetration identification in arc welding based on fusion of sound, voltage and spectrum signals," J. Intell. Manuf., vol. 28, no. 1, pp. 207–218, 2017. https://doi.org/10.1007/s10845-014-0971-y.

[31] X. Hongwei, Z. Xianmin, K. Yongcong, and O. Gaofei, "Solder joint inspection method for chip component using improved AdaBoost and decision tree," IEEE Trans. Compon. Packag. Manuf. Technol., vol. 1, no. 12, pp. 2018–2027, 2011. https://doi.org/10.1109/tcpmt.2011.2168531.

[32] H. Wu, "Solder joint defect classification based on ensemble learning," Solder. Surf. Mt. Technol., vol. 29, no. 3, pp. 164–170, 2017. https://doi.org/10.1108/ssmt-08-2016-0016.

[33] C. Knaak, U. Thombansen, P. Abels, and M. Kröger, "Machine learning as a comparative tool to determine the relevance of signal features in laser welding," Procedia CIRP, vol. 74, pp. 623–627, 2018. https://doi.org/10.1016/j.procir.2018.08.073.

[34] H. Wu, X. Zhang, H. Xie, Y. Kuang, and G. Ouyang, "Classification of solder joint using feature selection based on Bayes and support vector machine," IEEE Trans. Compon. Packag. Manuf. Technol., vol. 3, no. 3, pp. 516–522, 2013. https://doi.org/10.1109/tcpmt.2012.2231902.

[35] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Commun. ACM, vol. 60, no. 6, pp. 84–90, 2017. https://doi.org/10.1145/3065386.

[36] Y. Yang, L. Pan, J. Ma, et al., "A high-performance deep learning algorithm for the automated optical inspection of laser welding," Appl. Sci., vol. 10, no. 3, p. 933, 2020. https://doi.org/10.3390/app10030933.

[37] Y. Yang, R. Yang, L. Pan, et al., "Real-time monitoring of high-power disk laser welding statuses based on deep learning framework," J. Intell. Manuf., vol. 31, pp. 799–814, 2020. https://doi.org/10.1007/s10845-019-01477-w.

[38] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2818–2826. https://doi.org/10.1109/CVPR.2016.308.

[39] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in International Conference on Learning Representations (ICLR), 2015.

[40] A. Howard, M. Sandler, G. Chu, et al., "Searching for MobileNetV3," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 1314–1324.

[41] P. Stritt, M. Boley, A. Heider, et al., "Comprehensive process monitoring for laser welding process optimization," in Proc. SPIE 9741, High-Power Laser Materials Processing: Lasers, Beam Delivery, Diagnostics, and Applications V, vol. 9741, 2016, pp. 193–202. https://doi.org/10.1117/12.2212814.

[42] D. You, X. Gao, and S. Katayama, "Data-driven based analyzing and modeling of MIMO laser welding process by integration of six advanced sensors," Int. J. Adv. Manuf. Technol., vol. 82, nos. 5–8, pp. 1127–1139, 2016. https://doi.org/10.1007/s00170-015-7455-x.

[43] Y. Zhang, D. You, X. Gao, N. Zhang, and P. P. Gao, "Welding defects detection based on deep learning with multiple optical sensors during disk laser welding of thick plates," J. Manuf. Syst., vol. 51, pp. 87–94, 2019. https://doi.org/10.1016/j.jmsy.2019.02.004.

[44] T.-H. Kim, T.-H. Cho, Y. S. Moon, and S. H. Park, "Visual inspection system for the classification of solder joints," Pattern Recognit., vol. 32, no. 4, pp. 565–575, 1999. https://doi.org/10.1016/s0031-3203(98)00103-4.

[45] S.-C. Lin, C. H. Chou, and C.-H. Su, "A development of visual inspection system for surface mounted devices on printed circuit board," in IECON 2007 – 33rd Annual Conference of the IEEE Industrial Electronics Society, 2007, pp. 2440–2445. https://doi.org/10.1109/IECON.2007.4459975.

[46] K. Y. Chan, K. F. C. Yiu, H.-K. Lam, and B. W. Wong, "Ball bonding inspections using a conjoint framework with machine learning and human judgement," Appl. Soft Comput., vol. 102, p. 107115, 2021. https://doi.org/10.1016/j.asoc.2021.107115.

[47] P. M. Dold, F. Bleier, M. Boley, and R. Mikut, "Multi-stage inspection of laser welding defects using machine learning," in Proceedings 32. Workshop Computational Intelligence, vol. 1, 2022, pp. 31–52.

[48] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778. https://doi.org/10.1109/CVPR.2016.90.

[49] M. Christ, N. Braun, J. Neuffer, and A. W. Kempa-Liehr, "Time series feature extraction on basis of scalable hypothesis tests (tsfresh – a Python package)," Neurocomputing, vol. 307, pp. 72–77, 2018. https://doi.org/10.1016/j.neucom.2018.03.067.

[50] A. G. Howard, M. Zhu, B. Chen, et al., "MobileNets: efficient convolutional neural networks for mobile vision applications," arXiv:1704.04861, 2017.

[51] F. Pedregosa, G. Varoquaux, A. Gramfort, et al., "Scikit-learn: machine learning in Python," J. Mach. Learn. Res., vol. 12, pp. 2825–2830, 2011.

[52] F. Chollet, "Keras," GitHub, 2015. Available at: https://github.com/fchollet/keras.

[53] M. Abadi, P. Barham, J. Chen, et al., "TensorFlow: a system for large-scale machine learning," in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016, pp. 265–283.

[54] A. Brand, L. Allen, M. Altman, M. Hlava, and J. Scott, "Beyond authorship: attribution, contribution, collaboration, and credit," Learn. Publ., vol. 28, no. 2, pp. 151–155, 2015. https://doi.org/10.1087/20150211.

Received: 2023-03-27
Accepted: 2023-08-16
Published Online: 2023-10-17
Published in Print: 2023-10-26

© 2023 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
