Open Access. Published by De Gruyter, October 28, 2023, under the CC BY 4.0 license.

Research on the center extraction algorithm of structured light fringe based on an improved gray gravity center method

Jun Wang, Jingjing Wu, Xiang Jiao, and Yue Ding

Abstract

In this study, we propose a fast line-structured light stripe center extraction algorithm based on an improved barycenter algorithm to address the problem that conventional stripe center extraction algorithms cannot meet the requirements of a structured light 3D measurement system in terms of speed and accuracy. First, the algorithm preprocesses the structured light image and obtains the approximate position of the stripe center through skeleton extraction. Next, the normal direction of each pixel on the skeleton is solved using the gray gradient method. Then, the weighted gray gravity center method is used to solve the stripe center coordinates along the normal direction. Finally, a smooth stripe centerline is fitted using the least squares method. The experimental results show that the improved algorithm achieves a significant improvement in speed, sub-pixel accuracy, and a good structured light stripe center extraction effect. The repeated measurement accuracy of the improved algorithm is within 0.01 mm, demonstrating good repeatability.

1 Introduction

Linear structured light three-dimensional (3D) topography reconstruction technology has been widely used in 3D measurement fields such as automobile, medicine, and mold manufacturing owing to its advantages of noncontact, high precision, and good real-time performance [1,2,3,4,5]. According to the principle of 3D measurement of structured light, the structured light generator projects structured light on the surface of the object to produce a light stripe image containing the surface topography information of the object and the relative position information between the camera and the structured light generator. To obtain the 3D information contained in the structured light stripe, the precise position of the stripe center must be obtained first. In general, the captured structured light stripe has a certain width, ranging from a dozen pixels to a few dozen pixels. Therefore, the stripe line with a certain width must be transformed into a single-pixel stripe line to accurately obtain the center position information. This process is called the extraction of the centerline of the structured light stripe [6,7,8,9,10,11,12].

The extraction accuracy of the centerline of linear structured light fringes directly affects the accuracy of the 3D measurement system and plays a vital role in the overall performance of the system. Common algorithms for stripe center extraction include the extreme value method, geometric center method, gray gravity center method, direction template method, and Steger method [13,14]. The principle of the extremum method is simple and fast, but it is sensitive to noise and susceptible to light saturation points. The geometric center method is simple and fast, but it places high demands on the precision of edge detection. The gray gravity center method uses the coordinate and gray value of each pixel to find the point where the gray-value centroid is located and takes it as the center point; the method is fast and accurate, but it imposes strict requirements on the width and direction of the light strip. The direction template method has high precision, but it requires a large amount of computation and yields poor real-time performance. The Steger method based on the Hessian matrix has high precision, but it is not easy to smooth the data along the normal direction of the light strip, and the extraction speed is slow [15].

To address the advantages and disadvantages of common conventional algorithms, experts and scholars worldwide have proposed many improved algorithms adapted to practical situations. Li et al. [16] proposed an improved gray gravity center method that adds a least squares template to calculate the direction and curvature of each pixel on the light strip. This method has high extraction accuracy but requires a large amount of computation. Zhang et al. [17] proposed a variable-width gray gravity center method, which uses adaptive binarization to determine the edges of the light strip and then the gray gravity center method to determine the center. This method can improve the accuracy of structured light center extraction, but its anti-noise capability is poor. Wu et al. [18] proposed a gray gravity center method based on directional templates that uses four templates, namely horizontal, vertical, left-leaning 45°, and right-leaning 45°, to determine the normal direction of the light strip. This method has strong anti-noise capability and can repair broken lines; however, it requires a large amount of computation and is sensitive to the direction of the light strip.

In practical engineering applications, the reflected structured light is scattered in all directions owing to the uneven surface roughness of the measured object, resulting in an uneven distribution of the structured light intensity, which degrades the quality of the structured light image. To solve these problems, this study proposes a fast stripe center extraction algorithm based on the improved gray gravity center method, which obtains the stripe center information accurately and quickly.

2 Basic principle of the gray gravity center method

The linear structured light projected by the linear structured light generator is produced by passing a point laser source through a combination of cylindrical and spherical mirrors. The light intensity across its cross-section generally follows an ideal Gaussian distribution, expressed mathematically as follows:

(1) G(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right),

where μ is the mathematical expectation (mean) and σ is the standard deviation.

However, in practice, the laser reflectivity of the workpiece varies owing to the material, reflectivity, and roughness of the measured object, and the light intensity across the structured light strip section also changes. The image of the structured light on the surface of the measured object is easily affected by diffuse reflection, which generates a large amount of random noise distributed around the structured light stripes. As a result, the gray level of the cross-section of the light strip in the image presents an asymmetric, approximately Gaussian distribution, as shown in Figure 1.

Figure 1: Gray scale curve of a light strip section.

In physics, every rigid body has a center of mass; transferring this concept to an image, the corresponding quantity can be regarded as the gray center of gravity of the image [19]. Let the coordinates of n discrete particle points on the X-axis be x_i and their masses be m_i; then, the center of mass x_c of the n points is the average position of the particles weighted by the proportion of each particle's mass in the total mass of the particle system, as given in formula (2).

(2) x_c = \sum_{i=1}^{n} m_i x_i \Big/ \sum_{i=1}^{n} m_i .
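For concreteness, the following minimal NumPy sketch evaluates formula (2) for a handful of particle positions and masses; the numeric values are illustrative only and merely show the weighted-average computation.

```python
import numpy as np

# Minimal numeric sketch of formula (2); positions and masses are illustrative only.
x = np.array([1.0, 2.0, 3.0, 4.0])   # particle coordinates x_i
m = np.array([1.0, 2.0, 3.0, 2.0])   # particle masses m_i
x_c = np.sum(m * x) / np.sum(m)      # mass-weighted average position
print(x_c)                           # 2.75
```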

In the center extraction of structured light stripes, the stripe center can be located based on the gray barycenter. The conventional gray gravity center method extracts the structured light stripe center quickly and with high precision; however, it has the following shortcomings: (1) The extraction is based on the gray center of gravity computed over the extracted center area, which is closely related to the width of the light stripe; when the stripe width changes, the precision of the gray gravity center method changes as well. (2) The method uses the pixels with higher gray values on the light strip to determine the stripe center, so its accuracy depends on how concentrated the energy of the light strip is. Because of the surface roughness of the measured object, the brightness of the light strip is not concentrated and is widely dispersed, which reduces the extraction accuracy. (3) The method scans by rows and columns, i.e., in only two directions. When the direction of the stripe in the image changes irregularly, its performance is easily affected, leading to a decrease in the accuracy of the center extraction.

In view of the shortcomings of the conventional gray gravity center method, an improved gray gravity center method is proposed in this article for fast extraction of the light strip center, which improves the judgment of the light strip direction, relaxes the limitation on its width, and increases the extraction speed while ensuring accuracy.

3 Fast strip center extraction algorithm based on an improved gray gravity center method and test design

In this study, the flow of the fast stripe center extraction algorithm based on the improved gray gravity center method can be summarized as follows: (1) input the structured light stripe image; (2) preprocess the input image; (3) extract the rough centerline of the stripe, i.e., the single-pixel stripe skeleton; (4) solve the normal direction of each pixel point on the stripe skeleton using the mean square gray gradient method; (5) solve the coordinates of the stripe center points along the normal direction using the weighted gray gravity center method; and (6) fit the precise centerline of the fringe using the least squares method and output its parameters. A flowchart of the improved algorithm proposed in this study is shown in Figure 2.

Figure 2: Flowchart of the fast strip center extraction algorithm based on the improved gray gravity center method.

3.1 Image preprocessing based on connected domain analysis

To suppress random noise, contrast reduction, image blurring, and other problems generated during image acquisition, reduce the interference of noise with the extraction of effective information from structured light images, and improve image quality, image preprocessing is necessary before extracting the centerline of the structured light stripe. In the structured light images actually collected, only a small part of the image contains stripe information; to improve processing efficiency, the stripe region should be extracted first. In view of these requirements, an image preprocessing algorithm based on connected domain analysis is proposed in this study. The algorithm flow is as follows (a code sketch of these four steps is given after the list below): (1) perform a mask operation and connected domain analysis to extract the stripe region in the image; (2) apply median filtering to the extracted fringe region; (3) binarize the filtered image; and (4) apply a morphological open-close operation to complete the processing.

  1. For stripe region extraction, a mask operation was applied to the target image to remove areas whose gray values fall within a certain range, suppressing the interference of illumination changes with the extraction. After the mask operation, the connected domains of the structured light image were analyzed, and the structured light region was extracted. The connected domain search is primarily based on the connected domain labeling method. After labeling, the unconnected regions were counted and marked; small connected regions (spot interference) can then be screened out according to the area of each connected domain, retaining the main structured light stripes of interest.

  2. The selection of the filtering algorithm is mainly based on the characteristics of the structured light fringe image, the fringe edge retention after processing, the noise suppression effect, and the processing time. In this study, a median filter was selected to process the structured light stripe image to suppress salt-and-pepper noise.

  3. The filtered image was segmented using a threshold to obtain a binary image. Because the gray levels of the fringe and the background differ significantly in the linear structured light stripe image, a fixed threshold method was used for segmentation, with the threshold set to 170.

  4. Because different object surfaces have different reflective characteristics, the structured light stripe images projected on them may contain broken lines or holes, resulting in the loss of center point information, which is not conducive to subsequent center extraction. Therefore, morphological opening and closing operations were applied to the image after threshold segmentation.
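The following Python sketch illustrates the four preprocessing steps above using OpenCV, assuming an 8-bit grayscale input. The function name, the minimum connected-domain area, and the kernel sizes are illustrative assumptions; only the fixed threshold of 170 comes from the text, and the mask of step (1) is approximated here by that same threshold for simplicity.

```python
import cv2
import numpy as np

def preprocess_stripe_image(gray, min_area=200, threshold=170):
    """Sketch of the four preprocessing steps for an 8-bit grayscale stripe image."""
    # (1) Mask + connected domain analysis: keep only sufficiently large bright regions.
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    stripe_mask = np.zeros_like(mask)
    for label in range(1, n_labels):                    # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_area:  # drop small spot interference
            stripe_mask[labels == label] = 255
    roi = cv2.bitwise_and(gray, gray, mask=stripe_mask)
    # (2) Median filtering to suppress salt-and-pepper noise.
    filtered = cv2.medianBlur(roi, 5)
    # (3) Fixed-threshold binarization (threshold 170, as stated in the text).
    _, binary = cv2.threshold(filtered, threshold, 255, cv2.THRESH_BINARY)
    # (4) Morphological open-close to remove specks and bridge small breaks and holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
```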

3.2 Center extraction of structured light strip

After the structured light image is preprocessed, a higher-quality stripe image is obtained, and the stripe center is then extracted. In this study, the center extraction algorithm for structured light is divided into three parts: rough location of the center of the light strip, solution of the normal direction of the light strip, and precise location and fitting of the centerline of the light strip.

3.2.1 Coarse positioning of the center of the light strip

To determine the exact position of the centerline of the light stripe, the skeleton extraction algorithm was first used to obtain a rough location of the centerline. Image skeleton extraction, also known as image thinning, compresses the wider target features in the image into a thinner target that retains the shape of the original feature; thinning is repeated until the target region is a single pixel wide and its shape no longer changes.

The skeleton extraction algorithm adopted in this study is the Zhang–Suen fast parallel thinning algorithm [20], which has the advantages of convergence, connectivity preservation, and speed. The specific steps of this method are as follows. Assume that the eight-neighborhood of P1 is arranged as shown in Figure 3, where P1 is a foreground point with a value of 1. The implementation of the algorithm is divided into two steps.

Figure 3: Neighborhood pixel arrangement.

In the first step, if the eight-neighborhood of P1 meets the following four conditions, then P1 is deleted, i.e., P1 = 0. In practice, P1 is not set to 0 immediately but is marked first; after all the boundary points have been examined, all the marked points are deleted. The following conditions must be met:

  1. 2 ≤ N(P1) ≤ 6, where N(P1) is the number of nonzero points in the eight-neighborhood of P1;

  2. Z(P1) = 1, where Z(P1) is the number of 0-to-1 transitions in the ordered sequence P2, P3, …, P8, P9, P2;

  3. P2 × P4 × P6 = 0;

  4. P4 × P6 × P8 = 0.

In the second step, points are deleted according to the following four conditions:

  1. 2 ≤ N(P1) ≤ 6, where N(P1) is the number of nonzero points in the eight-neighborhood with P1 as the center;

  2. Z(P1) = 1, where Z(P1) is the number of 0-to-1 transitions in the ordered sequence P2, P3, …, P8, P9, P2;

  3. P2 × P4 × P8 = 0;

  4. P2 × P6 × P8 = 0.

A skeleton is extracted from the image preprocessed as described in Section 3.1, and the result is shown in Figure 4.
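As a point of reference, the following NumPy sketch implements the two-step Zhang–Suen thinning procedure with the conditions listed above. It is a straightforward, unoptimized loop; the function name and the 0/1 binary input convention are assumptions made for illustration.

```python
import numpy as np

def zhang_suen_thinning(binary):
    """Two-step Zhang-Suen thinning of a 0/1 binary image (unoptimized sketch)."""
    skel = (binary > 0).astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            marked = []
            rows, cols = skel.shape
            for r in range(1, rows - 1):
                for c in range(1, cols - 1):
                    if skel[r, c] != 1:
                        continue
                    # Neighbors P2..P9, clockwise, starting from the pixel above P1.
                    p = [skel[r - 1, c], skel[r - 1, c + 1], skel[r, c + 1],
                         skel[r + 1, c + 1], skel[r + 1, c], skel[r + 1, c - 1],
                         skel[r, c - 1], skel[r - 1, c - 1]]
                    n = sum(p)  # condition 1: number of nonzero neighbors
                    # condition 2: number of 0-to-1 transitions around the cycle
                    z = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    p2, p4, p6, p8 = p[0], p[2], p[4], p[6]
                    if step == 0:   # first sub-iteration, conditions 3 and 4
                        c3, c4 = p2 * p4 * p6 == 0, p4 * p6 * p8 == 0
                    else:           # second sub-iteration, conditions 3 and 4
                        c3, c4 = p2 * p4 * p8 == 0, p2 * p6 * p8 == 0
                    if 2 <= n <= 6 and z == 1 and c3 and c4:
                        marked.append((r, c))  # mark first, delete after the scan
            for r, c in marked:
                skel[r, c] = 0
            changed = changed or bool(marked)
    return skel
```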

Figure 4: Coarse positioning of the center of the light strip.

3.2.2 Solution of the normal direction of the light strip

To accurately obtain the center position of the structured light, the normal direction of the optical stripe skeleton should be determined first, and the central coordinates should then be solved along this normal direction. Common methods for solving the normal direction of the light strip include the direction template matching method and the Hessian matrix method. These algorithms require complex template matching or matrix calculations, which are time-consuming, easily affected by noise, and cannot meet real-time measurement requirements. To solve this problem, this study uses the mean square gray gradient method to calculate the normal direction of the pixel points on the fringe based on the optical stripe skeleton extracted in Section 3.2.1. The direction angle of each pixel is calculated using the gradient method. Assuming that the coordinates of a pixel point in the image are (x, y), its gray value is defined as f(x, y), and its direction angle is defined as θ(x, y). The calculation of the direction angle is described below.

In a complex plane, if any vector is squared, the angle between the vector and the positive direction of the X-axis doubles. The gray gradient at any point in the image coordinate system is expressed as F = (f_x, f_y). This value represents a vector in the complex plane, and the square of the vector is obtained as follows:

(3) (f_x + j f_y)^2 = (f_x^2 - f_y^2) + j(2 f_x f_y),

where f_x and f_y are the partial derivatives of the gray value at point (x, y) in the x- and y-directions, respectively. In this study, the Sobel gradient operator [21] was used to solve the gradient vector at point (x, y). The template of the Sobel gradient operator in the x-direction is shown in Figure 5, and the template in the y-direction is the transpose of the x-direction template.

Figure 5: X-direction template of the Sobel operator.

To improve the accuracy of the normal direction, this study considers each pixel point on the skeleton together with a local w × w window around it; the window size is chosen according to the width of the stripe. Taking the selected pixel point (x, y) as the center and the surrounding w × w window as the region of interest, the squared gray gradient of every pixel in the region is computed, and the average of these squared values is used to determine the normal direction at that point. That is, the direction field of the local block is calculated, and this direction field serves as the basis for the gray extraction direction. The direction angle θ(x, y) of the point is calculated as follows:

(4) v_x(x, y) = \sum_{u = x - \frac{w}{2}}^{x + \frac{w}{2}} \sum_{v = y - \frac{w}{2}}^{y + \frac{w}{2}} \left( f_x^2(u, v) - f_y^2(u, v) \right), \quad
    v_y(x, y) = \sum_{u = x - \frac{w}{2}}^{x + \frac{w}{2}} \sum_{v = y - \frac{w}{2}}^{y + \frac{w}{2}} 2 f_x(u, v) f_y(u, v), \quad
    \theta(x, y) = \tan^{-1} \frac{v_y(x, y)}{v_x(x, y)},

where v_x(x, y) and v_y(x, y) are the real and imaginary parts of the mean square gray gradient vector, respectively, with f_x ≠ 0 and f_y ≠ 0; when f_x or f_y evaluates to 0, the direction angle of the pixel is taken as 0. Based on the relationship between the direction angle and the normal direction, the normal direction of the light strip at point (x, y) is defined as T(x, y) and expressed as follows:

(5) T(x, y) = \begin{cases} \frac{1}{2}\theta(x, y) + \frac{\pi}{2}, & f_x > 0 \\ \frac{1}{2}\theta(x, y) + \frac{3\pi}{2}, & f_x < 0,\ f_y > 0 \\ \frac{1}{2}\theta(x, y), & f_x < 0,\ f_y < 0. \end{cases}

The mean square gray gradient method was used to solve the normal direction along the extracted skeleton, and the results are shown in Figure 6. The short red line segments in the figure indicate the normal direction of the light strip at each point, whereas the black line is the extracted rough center of the light strip. Because the width of the light strip is approximately 20 pixels, the window size used for the normal-direction calculation is w = 20.
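A compact sketch of how Eqs. (3)–(5) might be evaluated in practice is given below; it uses OpenCV's Sobel operator for the partial derivatives and computes the block-averaged direction with arctan2, which sidesteps the f_x = 0 special case. The function name, the (row, col) point convention, and the use of a single branch of Eq. (5) are simplifying assumptions, and the skeleton points are assumed to lie at least w/2 pixels from the image border.

```python
import cv2
import numpy as np

def stripe_normals(gray, skeleton_points, w=20):
    """Estimate the stripe normal at each skeleton point via the mean square
    gray gradient (sketch of Eqs. (3)-(5)); points are (row, col) tuples."""
    gray = gray.astype(np.float64)
    fx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # partial derivative f_x
    fy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # partial derivative f_y
    gxx = fx * fx - fy * fy                           # real part of (f_x + j f_y)^2
    gxy = 2.0 * fx * fy                               # imaginary part of (f_x + j f_y)^2
    half = w // 2
    normals = []
    for (r, c) in skeleton_points:  # assumed at least w/2 pixels from the border
        vx = gxx[r - half:r + half + 1, c - half:c + half + 1].sum()
        vy = gxy[r - half:r + half + 1, c - half:c + half + 1].sum()
        theta = np.arctan2(vy, vx)               # block direction angle, cf. Eq. (4)
        normals.append(0.5 * theta + np.pi / 2)  # normal direction, cf. Eq. (5)
    return normals
```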

Figure 6: Normal direction effect of the local light strip.

3.2.3 Accurate positioning and fitting of the center line of the strip

The optical stripe skeleton extracted in Section 3.2.1 represents the rough geometric center of the optical stripe rather than its energy center. The energy center of the light strip refers to the brightest pixels on the light strip. To obtain the energy center, the energy extremum point, namely the point of maximum gray value, must first be found. Starting from the stripe skeleton, the energy extremum point is searched along the normal direction of each pixel point, since the normal direction of each point on the skeleton has already been obtained. Because the structured light generated by the laser has a certain width, a light saturation region is produced when it illuminates the object surface, resulting in multiple points with the maximum gray value. In this case, the point located in the middle is selected as the energy extremum point.

In this study, each point on the extracted rough centerline of the light strip, namely each skeleton point, is taken as the energy extremum point. Then, n discrete points are taken on each side of this point along the normal direction, giving a total of 2n + 1 points; the number of discrete points can be adjusted according to actual needs. The gray values of the selected 2n + 1 points are recorded in the set G(x, y), and the position of the center point of the light strip (x_c, y_c) is determined using the weighted barycenter method [22] as follows:

(6) x_c = \frac{\sum_{2n+1} x\, G^2(x, y)}{\sum_{2n+1} G^2(x, y)}, \quad y_c = \frac{\sum_{2n+1} y\, G^2(x, y)}{\sum_{2n+1} G^2(x, y)}.

In this study, n = 10 was set based on the light strip width. Each point on the skeleton is traversed in turn according to the above method to find the pixel coordinates of the center points of the entire light strip.
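The following sketch shows how the weighted gray gravity center of Eq. (6) could be computed for a single skeleton point, sampling 2n + 1 pixels along the normal direction. Rounding each sample to the nearest pixel (rather than interpolating) and the fallback for an all-zero window are simplifying assumptions, as are the function and parameter names.

```python
import numpy as np

def weighted_center(gray, skel_point, normal_angle, n=10):
    """Weighted gray gravity center along the stripe normal (sketch of Eq. (6))."""
    r0, c0 = skel_point                          # (row, col) skeleton point
    dr, dc = np.sin(normal_angle), np.cos(normal_angle)
    rows, cols, weights = [], [], []
    for k in range(-n, n + 1):                   # 2n + 1 samples along the normal
        r = int(round(r0 + k * dr))
        c = int(round(c0 + k * dc))
        if 0 <= r < gray.shape[0] and 0 <= c < gray.shape[1]:
            rows.append(r)
            cols.append(c)
            weights.append(float(gray[r, c]) ** 2)   # squared gray value as weight
    w = np.asarray(weights)
    if w.sum() == 0:                             # flat background: keep the skeleton point
        return float(r0), float(c0)
    yc = np.dot(w, rows) / w.sum()               # weighted row coordinate
    xc = np.dot(w, cols) / w.sum()               # weighted column coordinate
    return yc, xc
```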

Although the center pixel coordinates of the light strip are obtained, the centerline formed by connecting these coordinates still exhibits broken-line defects and is not a smooth straight line. In this study, the least squares method is used to fit the center coordinates of the fringe to a smooth line, and the final equation of the line is obtained. The linear equation is defined as follows:

(7) f(x) = ax + b,

where a is the slope of the line and b is the intercept of the line.
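Fitting the extracted center points to the line of Eq. (7) by least squares is a one-call operation in NumPy; the coordinate values below are placeholders used only to illustrate the call.

```python
import numpy as np

# Minimal sketch of Eq. (7): least squares fit of the extracted center points
# to the line f(x) = a*x + b. The coordinate values below are placeholders.
xs = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
ys = np.array([55.2, 55.4, 55.1, 55.3, 55.2])
a, b = np.polyfit(xs, ys, deg=1)   # slope a and intercept b of the fitted centerline
print(a, b)
```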

3.2.4 Test design

In this study, the selected test object is an elevator guide rail, an important component commonly used in engineering, as shown in Figure 7. The structured light was projected on the side of the elevator guide rail (side B) to compare the extraction accuracy and speed of the conventional gray gravity center method, the geometric center method, the direction template method, and the improved center extraction algorithm proposed in this study. To verify the stability of the algorithm, the structured light was then projected on the side (side B) and bottom (side D) simultaneously, images of the stationary workpiece were captured 10 times, the proposed algorithm was used to extract the center, and the verticality between the bottom and side surfaces was obtained by combining the known system calibration parameters and the measurement model.

Figure 7: Elevator guide rail contour diagram. (a) Exterior view 1 and (b) exterior view 2.

4 Results and discussion

To verify the effectiveness of the proposed algorithm, an actual engineering application was used to compare the extraction accuracy and speed of the conventional gray gravity center method, the geometric center method, the direction template method, and the proposed improved center extraction algorithm, and the stability of the proposed algorithm was verified on an actual measurement object. The experimental environment was as follows: a PC with an Intel(R) Core i7-6700 CPU and 8 GB of memory, the Windows 10 64-bit operating system, and the MATLAB R2014a software platform.

4.1 Comparison test of strip center extraction accuracy

Figure 8 compares the test results of the different strip center extraction methods. To analyze the results of the different extraction algorithms more clearly, the coordinates of the center points of 51 groups of light strips from the extraction results of the different algorithms are plotted on the same graph, as shown in Figure 9.

Figure 8: Comparison of experimental results of different strip center extraction algorithms. (a) Gray gravity center method. (b) Geometric center method. (c) Direction template method. (d) The algorithm proposed in this study.

Figure 9: Comparison of extraction results of different algorithms.

It can be observed from Figures 8 and 9 that the result extracted using the gray gravity center method is biased large because it is strongly affected by noise. The vertical coordinate of the center of the light strip extracted using the geometric center method is always 55 pixels, and its accuracy is not high. The direction template method generally defines four common directional templates, which deviate from the normal direction of the structured light fringe in this study, resulting in extraction results that are biased small. The extraction accuracy of the algorithm proposed in this study reaches the sub-pixel level, and the extracted results are the closest to the actual center. Enlarging the extraction results of the proposed algorithm in Figure 8 shows that the extracted centerline is smooth.

4.2 Comparison test of strip center extraction speed

The four center extraction algorithms presented in Section 4.1 were used to extract the light strip center of the same image; they are denoted algorithms 1 (gray gravity center method), 2 (geometric center method), 3 (direction template method), and 4 (the proposed method), respectively. The running times of the four algorithms are listed in Table 1, and the average time over 10 runs of each algorithm is shown in Figure 10. It can be observed from the results that the running time of the direction template method is the longest because the complex template matching operation increases the processing time. The geometric center method is the fastest, but its accuracy is insufficient. The speed of the improved algorithm proposed in this study is much higher than that of the direction template method, and its efficiency is also improved compared with that of the conventional gray gravity center method.

Table 1

Comparison of the running time of different algorithms

Order Algorithm 1 time (s) Algorithm 2 time (s) Algorithm 3 time (s) Algorithm 4 time (s)
1 0.02905 0.005204 0.081017 0.01219
2 0.02011 0.005705 0.081692 0.011936
3 0.02626 0.006336 0.08247 0.01132
4 0.02615 0.006334 0.081937 0.012239
5 0.02509 0.006011 0.080342 0.011814
6 0.02526 0.008527 0.081192 0.010233
7 0.02575 0.005104 0.081897 0.011387
8 0.02480 0.008751 0.080544 0.013379
9 0.02502 0.004566 0.082177 0.011621
10 0.02533 0.007477 0.082765 0.013401
Average 0.02528 0.00664 0.0816 0.011952
Figure 10: Comparison of the running time of different strip center extraction methods.

4.3 Improved algorithm repeatability test

The improved algorithm proposed in this study was used to extract the two-dimensional information of the center feature point of the structured light stripe, and the 3D spatial information was obtained through coordinate transformation. The measurement mathematical model was applied to obtain the verticality values of the bottom and side surfaces of the elevator guide rail. The measurement results are listed in Table 2. It can be observed from the table that the repeated measurement accuracy is within 0.01 mm, and the algorithm has good repeatability.

Table 2

Verticality measurement values

Order 1 2 3 4 5
Measured value (mm) 0.024 0.021 0.026 0.027 0.029
Order 6 7 8 9 10
Measured value (mm) 0.025 0.022 0.026 0.028 0.023

5 Conclusion

In this study, an algorithm based on the improved gray gravity center method is proposed to extract the center of line-structured light, which greatly improves the accuracy and speed of line-structured light extraction. The image preprocessing algorithm based on connected domain analysis effectively improves the image quality and the subsequent processing speed, and the normal direction judgment method based on the mean square gray gradient improves real-time performance while ensuring accuracy. At the same time, the weighted gray barycenter method is used to calculate the pixel coordinates of the center points, and the least squares method is used to fit a smooth centerline of the light strip. The experimental results show that the proposed algorithm greatly improves the speed of extracting the center of line-structured light, reaches sub-pixel accuracy, and has good repeatability, meeting the requirements of a 3D measurement system. The proposed algorithm effectively addresses the problem of uneven structured light intensity. However, shortcomings remain regarding uneven light strip width, local disconnection, and real-time performance, which need to be optimized and improved further.

In practical engineering applications, the extraction of structured light centers is easily influenced by multiple factors, and it is difficult for traditional models to overcome all the problems of structured light center extraction. Therefore, future line-structured light center extraction should gradually move toward deep learning approaches, which offer greater flexibility, generalization, and real-time capability and can significantly improve the real-time performance and applicability of line-structured light center extraction.

  1. Funding information: The authors sincerely appreciate the Natural Science Research Project of industry–university–research cooperation project in Jiangsu Province (No. BY2019043), the young and middle-aged academic leader of Jiangsu University’s “Qinglan Project”, the Natural Science Research Project of Wuxi Institute of Technology (No. ZK2023010), the Education Department of Jiangsu Province, high-end training for teachers in higher vocational colleges in Jiangsu Province, Natural Science Research Project of Higher Education in Jiangsu Province (No. 20KJB520022), the scientific research subject of Wuxi Institute of Technology (No. BT2018-02), the enterprise practice training project for young teachers of higher vocational colleges in Jiangsu Province (No. 2020qysjpx186), and innovation and entrepreneurship training program for college students in Jiangsu Province (No. 202010848037Y).

  2. Author contributions: Jun Wang conducted the experiments, analyzed the data, and wrote the manuscript; Jingjing Wu and Xiang Jiao designed the project and revised the manuscript; Yue Ding contributed valuable ideas to the data analysis.

  3. Conflict of interest: The authors declare that they have no conflict of interest.

  4. Data availability statement: Data will be made available on request.

References

[1] Ma Y, Wang Z, Yang G, Wang P. A system based on structured-light sensors for measurement of pavement evenness. Chin J Sens Actuators. 2013;26(11):1597–603.

[2] Zhongjun D, Ziyi Z, Chuntang Z, Wenchao P, Yumeng L. 3D reconstruction of deep sea geomorphologic linear structured light based on manned submersible. Infrared Laser Eng. 2019;48(5). doi:10.3788/IRLA201948.0503001.

[3] Xu Z, Forsberg E, Guo Y, Cai F, He S. Light-sheet microscopy for surface topography measurements and quantitative analysis. Sensors. 2020;20(10):E2842. doi:10.3390/s20102842.

[4] Zhao C, Yang J, Zhou F, Sun J, Li X, Xie W. A robust laser stripe extraction method for structured-light vision sensing. Sensors. 2020;20(16):E4544. doi:10.3390/s20164544.

[5] Zhang L, Zhang Y, Chen B. Improving the extracting precision of stripe center for structured light measurement. Optik. 2020;207:163816. doi:10.1016/j.ijleo.2019.163816.

[6] Su X, Xiong X. High-speed method for extracting center of line structured light. J Comput Appl. 2016;36(1):238–42.

[7] Shi X, Sun Y, Liu H, Bai L, Lin C. Research on laser stripe characteristics and center extraction algorithm for desktop laser scanner. SN Appl Sci. 2021;3(3):2523–3963. doi:10.1007/s42452-021-04309-w.

[8] Wang Z, Liu S, Hu J, Zhang W, Huang H, Liu J. Line structured light 3D measurement technology for pipeline microscratches based on telecentric lens. Opt Eng. 2021;60(12):124108. doi:10.1117/1.OE.60.12.124108.

[9] Yu W, Li Y, Yang H, Qian B. The centerline extraction algorithm of weld line structured light stripe based on pyramid scene parsing network. IEEE Access. 2021;9(1):1105144–52. doi:10.1109/ACCESS.2021.3098833.

[10] He Z, Kang L, Zhao X, Zhang S, Tan J. Robust laser stripe extraction for 3D measurement of complex objects. Meas Sci Technol. 2021;32(6):065002. doi:10.1088/1361-6501/abd57b.

[11] Wan Z, Lai L, Yin X, Mao J, Zhu L. Robot line structured light vision measurement system: light strip center extraction and system calibration. Opt Eng. 2021;60(11):114102. doi:10.1117/1.OE.60.11.114102.

[12] Zeng H, Weiming L, Xingyu G, Bingqiang Y, Chuannen W. Fast center extraction algorithm for line structured laser stripe of antiwelding slag spatter. Laser Optoelectron Prog. 2022;59(16):1611011. doi:10.3788/LOP202259.1611011.

[13] Li YH, Zhou JB, Liu LJ. Research progress of the line structured light measurement technique. J Hebei Univ Sci Technol. 2018;39(2):115–24.

[14] Pang S, Yang H. An algorithm for extracting the center of linear structured light fringe based on directional template. 2021 4th International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE); 2021. p. 203–7. doi:10.1109/AEMCSE51986.2021.00049.

[15] Steger C. An unbiased detector of curvilinear structures. IEEE Trans Pattern Anal Mach Intell. 1998;20(2):113–25. doi:10.1109/34.659930.

[16] Li Y, Zhou J, Huang F, Liu L. Sub-pixel extraction of laser stripe center using an improved gray-gravity method. Sensors. 2017;17(4):814. doi:10.3390/s17040814.

[17] Zhang XY, Wang XQ, Bai FZ, Tian CP, Mei XZ. Improved gray centroid method for extracting the centre-line of light-stripe. Laser Infrared. 2016;46(5):622–6.

[18] Wu QY, Su XY, Li JZ. A new method for extracting the centre-line of line structure light-stripe. J Sichuan Univ (Eng Sci Ed). 2007;39(4):151–5.

[19] Zhang X. Research on robotic welding system and multipass planning based on laser vision sensor [dissertation]. Shanghai: Shanghai Jiao Tong University.

[20] Zhang TY, Suen CY. A fast parallel algorithm for thinning digital patterns. Commun ACM. 1984;27(3):236–9. doi:10.1145/357994.358023.

[21] Zhang XB, Liu W. A new video segmentation method based on modified watershed combined with temporal information. Chin J Sens Actuators. 2007;20(10):2248–52.

[22] Weiwei Z, Haiyan L, Xiu W, Li C. Experimental study on sub-pixel subdivision location of linear CCD based on gray weighted centroid algorithm. Opt Tech. 2018;44(4):476–9.

Received: 2022-04-23
Revised: 2023-08-19
Accepted: 2023-08-28
Published Online: 2023-10-28

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
