
CN115482195B - Train part deformation detection method based on three-dimensional point cloud - Google Patents


Info

Publication number
CN115482195B
CN115482195B (application CN202210930378.6A / CN202210930378A; published as CN115482195A, granted as CN115482195B)
Authority
CN
China
Prior art keywords
point cloud
dimensional
box body
deformation
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210930378.6A
Other languages
Chinese (zh)
Other versions
CN115482195A (en)
Inventor
秦娜 (Qin Na)
杜元福 (Du Yuanfu)
刘佳辉 (Liu Jiahui)
周期 (Zhou Qi)
谢林孜 (Xie Linzi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University
Priority to CN202210930378.6A
Publication of CN115482195A
Application granted
Publication of CN115482195B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a train part deformation detection method based on three-dimensional point cloud. The box body is located with the PP-PicoDet detection model, the target area in the two-dimensional image is mapped into the point cloud, and the box surface point cloud is segmented by an MLESAC-based region method. The SAC-IA and nonlinear ICP algorithms then complete the registration of the template point cloud and the target point cloud; finally, deformation faults of the box body are judged from the matching points of the two point clouds and their RMSE index. The invention combines the characteristics of the two-dimensional image and the three-dimensional point cloud and, by means of the scenes each data-processing algorithm is suited to, realizes positioning, segmentation, registration and deformation detection of various boxes in the complex environment of the subway underframe. By calculating the number of matching points and the RMSE of the target point cloud and the template point cloud in the same reference frame, deformation detection and localization are achieved in both three-dimensional point-cloud space and image pixel space.

Description

Train part deformation detection method based on three-dimensional point cloud
Technical Field
The invention relates to the technical field of train part deformation fault detection, in particular to a train part deformation detection method based on three-dimensional point cloud.
Background
With the application of the Internet of Things, information technology, 5G, artificial intelligence and big data in the urban rail transit industry, the intelligence level of urban rail transit in China has greatly improved. Signs of the intelligent subway are visible everywhere in and out of the station, and passengers easily enjoy digital transit. On the side passengers cannot see, however, subway inspection is still mainly manual: inspection workers examine the train by eye at subway stop points. Because the bottom structure of a subway train is complex and the parts to be inspected are numerous and varied, inspection is easily affected by the external environment and by psychological factors; as a result, inspection is slow, the missed-detection rate is high, accuracy is low and cost is high, which ultimately affects the operational safety of the train. There is therefore a great need for intelligent train maintenance combining current front-end technologies such as computer vision, artificial intelligence and anomaly detection.
At present, a large number of machine-vision imaging systems are applied in the civil and industrial fields. In the civil field there are projects such as traffic-violation photography, face-recognition access control in key areas, medical auxiliary diagnosis and film special-effect production, which provide safety and convenience for daily life. In the industrial field, machine vision can be regarded as the eyes of an industrial automation system, with roles ranging from object/barcode identification, product inspection and dimensional measurement to the positioning of robotic arms and conveying equipment. Machine-vision fault detection for rail transit is now also on the agenda and has become a research hotspot. Although machine-vision fault detection is still in a development stage, most fault-detection algorithms are based on 2D images: these methods have low hardware requirements and strong algorithmic extensibility, can complete detection for most parts, and image acquisition is simple and fast, with easy transmission and processing. However, a 2D image cannot capture the spatial information of the component under inspection and is easily affected by ambient light and surface stains. In a subway train, it is difficult to determine abnormalities of some components using 2D information alone; the judgment must be completed with spatial information, such as the wear amount of consumable components or the measurement of component dimensions. Therefore, 3D-based anomaly detection is also a hotspot of current research.
Most subway appearance-inspection systems currently in use collect images of train components with line-scan cameras fixed at maintenance points: as the train passes the maintenance point and moves relative to the cameras, images are acquired line by line using the line-scan imaging principle. An acquisition system of this kind can only photograph surface parts at the bottom of the train; some side surfaces and occluded parts cannot be captured, and many fine measurement tasks cannot be performed. Such a system therefore cannot fully replace manual inspection and only reduces the workload of maintenance personnel to a certain extent. The bottom of a subway train carries a large number of boxes to be inspected, such as the auxiliary brake box, the PA box (a traction box mounted on the C car that integrates an auxiliary inverter (DC/AC) in one half and a VVVF traction inverter in the other half) and the PH box (a traction box mounted on the B car that integrates high-voltage devices, with an HSCB high-speed circuit breaker and high-voltage sensors in one half and a VVVF traction inverter in the other half). Various precision devices are usually installed in these boxes; abnormal deformation of a box is likely to damage the internal devices and poses a great hazard to the running safety of the train.
The main methods for measuring deformation of an object surface are the grid method and the digital speckle correlation method. The grid method requires a specified grid to be prefabricated on the object surface; the quality requirements for drawing the grid are high and the achievable deformation accuracy is limited, which restricts its application to a certain extent. The digital speckle correlation method is an optical measurement method that computes full-field deformation directly by matching the features of speckle fields randomly distributed on the object surface, and the density of the deformation-displacement data can be set as needed during calculation. At present, machine-vision deformation detection has not been applied to the bottom boxes of subway trains, where the inspection scheme is still mainly manual, probably for the following three reasons. First, the surface-measurement techniques above require a prefabricated grid or a generated speckle field on the part under test, operations that the actual train components do not allow. Second, the environment at the train bottom is easily affected by illumination changes, dirt and occlusion, which directly alter the collected image features. Third, negative samples of the parts under test are difficult to obtain: samples of deformed boxes are very few in the data collected so far. A three-dimensional point cloud can directly provide the spatial distribution of the part under test and is thus well suited to deformation measurement, a detection problem with obvious spatial-distribution changes.
Three-dimensional point clouds are still rarely used in machine-vision applications because: first, compared with two-dimensional images, three-dimensional point-cloud processing algorithms are not yet mature, and many machine-vision algorithms are unsuitable for, or struggle with, point-cloud analysis; second, high-precision three-dimensional capture equipment is expensive, a 3D camera generally costing much more than a line-scan camera; third, point-cloud processing is computationally heavy and relies on high-performance computing, since point-cloud data generally include the spatial coordinates, color components and even normal vectors of the points, and the huge data volume leads to massive computation during processing.
From the background above, five key points that 2D+3D vision must address for detecting deformation of subway-train bottom boxes can be identified: (1) the algorithm model must effectively suppress interference from environmental factors, such as illumination changes and stains, and be highly robust; (2) the algorithm model must be able to detect deformation with few samples; (3) the algorithm model must exploit the characteristics of the three-dimensional point cloud to complete fine detection; (4) the algorithm model must offer high precision, high stability and generalization, detecting the parts under test across carriages of different train numbers, so that manual inspection can be replaced and running safety ensured; (5) the time available for overhaul is only an operational idle period, within which inspection and maintenance of the whole train must be completed, so detection efficiency must be extremely high and the algorithm model must finish each check point accurately in a short time. A new train-part deformation detection method that solves these problems is therefore needed.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a train part deformation detection method based on three-dimensional point cloud, which realizes positioning, segmentation, registration and deformation detection of the box body in the complex environment of the subway underframe, judges whether the box body is deformed by calculating objective, real data, and solves the problems in the prior art.
In order to achieve the above purpose, the present invention provides the following technical solutions: a method for detecting deformation of a train component based on a three-dimensional point cloud, the method comprising the steps of:
S1, data acquisition: the train inspection trolley automatically positions itself at a designated location, and a two-dimensional image and a three-dimensional point cloud of the box body are acquired by a three-dimensional industrial camera on the mechanical arm;
S2, a PP-PicoDet detection model, combined with the mapping relation between the two-dimensional image and the three-dimensional point cloud, locates the box point cloud in three-dimensional point-cloud space;
S3, the box point cloud is segmented by a region-segmentation algorithm based on depth information and MLESAC (maximum likelihood estimation sample consensus) to obtain the target point cloud P_o of the box plane;
S4, the target point cloud P_o of the box is registered with the template point cloud P_t pre-stored in the database, based on the SAC-IA (sample consensus initial alignment) and nonlinear ICP (iterative closest point) algorithms;
S5, deformation of the box body is judged;
S6, the deformation of the box body is localized.
Preferably, the step S2 specifically includes the following steps:
S21: load the trained PP-PicoDet target detection model and perform target detection on the box body in the two-dimensional image to obtain the two-dimensional coordinates of the box bounding box;
S22: map the detected two-dimensional coordinates into the three-dimensional point cloud to obtain the box target point cloud P_o.
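As an illustration of S21 and S22, the bounding-box-to-point-cloud mapping can be sketched in a few lines. This is a minimal numpy sketch, assuming a pinhole camera with focal length f and pixel pitch dx, dy; the principal point (u0, v0) is an assumption not stated above, and the function name is illustrative only.

```python
import numpy as np

def crop_cloud_by_bbox(points, bbox, f, dx, dy, u0=0.0, v0=0.0):
    """Keep the 3D points whose pinhole projection falls inside a 2D box.

    points : (N, 3) array of camera-frame coordinates (x_c, y_c, z_c).
    bbox   : (u_min, v_min, u_max, v_max) from the 2D detector.
    f      : focal length; dx, dy: physical pixel width and height.
    u0, v0 : principal point (an assumption; the text leaves it implicit).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    u = f * x / (dx * z) + u0          # u = f * x_c / (dx * z_c)
    v = f * y / (dy * z) + v0          # v = f * y_c / (dy * z_c)
    u_min, v_min, u_max, v_max = bbox
    mask = (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max) & (z > 0)
    return points[mask]
```

In the method itself the surviving cloud would then be further screened by depth, as step S3 describes.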
Preferably, the step S3 specifically includes the following steps:
S31: filter out the large amount of environmental point cloud outside the box body using the point-cloud depth information.
S32: preprocess the point-cloud data with a normal-based point-cloud bilateral filter to further remove environmental points.
S33: detect the globally optimal plane model in the point set with the MLESAC algorithm, further removing outliers such as the remaining environmental points, and output the inliers that fit the required plane model.
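The robust plane detection of S33 can be sketched as follows. For brevity this uses a truncated-cost (MSAC-style) score as a stand-in for the full MLESAC likelihood that the patent's algorithm maximizes; the function name and parameters are illustrative.

```python
import numpy as np

def fit_plane_msac(points, n_iters=200, sigma=0.01, seed=0):
    """Robust plane fit by repeated 3-point sampling with a truncated
    squared-residual (MSAC) score, a simplification of MLESAC.
    Returns (normal, d) with normal . p + d = 0, and the inlier mask."""
    rng = np.random.default_rng(seed)
    best_cost, best_model = np.inf, None
    t2 = (2.0 * sigma) ** 2                      # truncation threshold
    for _ in range(n_iters):
        idx = rng.choice(len(points), 3, replace=False)
        p0, p1, p2 = points[idx]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                             # degenerate sample
        n = n / norm
        d = -np.dot(n, p0)
        r2 = (points @ n + d) ** 2               # squared point-plane residuals
        cost = np.minimum(r2, t2).sum()          # truncated (MSAC) cost
        if cost < best_cost:
            best_cost, best_model = cost, (n, d)
    n, d = best_model
    inliers = (points @ n + d) ** 2 <= t2
    return n, d, inliers
```

True MLESAC replaces the truncated cost with the negative log-likelihood of a Gaussian-inlier plus uniform-outlier mixture, which removes the hard threshold on the error function.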
Preferably, the step S4 specifically includes the following steps:
S41: set the edge length Δl of the three-dimensional voxel grid and downsample the template point cloud P_t to reduce the computational load of the detection process.
S42: coarsely register the target point cloud and the template point cloud with the SAC-IA algorithm to obtain an initial rotation matrix and translation vector.
S43: taking the initial rotation matrix and translation vector as input, finely register the two point clouds with the nonlinear ICP algorithm to obtain the final rotation matrix and translation vector and the number of matching point pairs N_p between the two clouds; the point cloud obtained by spatially transforming the target point cloud is denoted the registered point cloud P_r.
Preferably, the step S5 specifically includes the following steps:
S51: obtain the number of matching pairs N_p of the target point cloud P_o and the template point cloud P_t, and set the matching-pair threshold T_match according to the size of the target point cloud as T_match = N/λ, where N is the number of points in the target point cloud P_o and λ is a proportionality constant set in the program;
S52: calculate the root mean square error RMSE between the matched points of the box template point cloud and the target point cloud, and complete the logical judgment of box deformation with the two evaluation parameters T_match and RMSE. When the number of matching points N_p is less than the threshold T_match, the similarity of the two clouds is low and the box body is deformed; when N_p is greater than T_match, the RMSE parameter is used for further judgment, comparing it with the deformation threshold T_R recorded from historical and standard data: when RMSE is above T_R the similarity of the two clouds is low and the box body is deformed, and below T_R the similarity is high and the box surface is normal. The RMSE over the matched pairs is

RMSE = sqrt( (1/N_p) · Σ_{i=1}^{N_p} || p_i^o − p_i^t ||² ),

where N_p is the number of matching point pairs of the target point cloud and the template point cloud, and p_i^o, p_i^t are the i-th matched points of the two clouds.
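The two-stage decision logic of S51 and S52 can be written down directly; `lam` and `t_r` stand for the constants λ and T_R above, and the function name is illustrative.

```python
def judge_deformation(n_p, rmse, n_target, lam, t_r):
    """Two-stage fault logic: first the matched-pair count against
    T_match = N / lam, then RMSE against the history-derived threshold
    T_R. Returns True when the box surface is judged deformed."""
    t_match = n_target / lam
    if n_p < t_match:        # too few matches: low similarity, deformed
        return True
    return rmse > t_r        # enough matches: decide on registration error
```

For example, with N = 1000 points, λ = 5 and T_R = 0.5, a registration yielding only 100 matches is flagged as deformed regardless of RMSE, while 300 matches with RMSE 0.9 is flagged on the RMSE test.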
Preferably, the step S6 specifically includes the following steps:
S61: decide whether localization is needed according to the deformation judgment of the previous step; if deformation has occurred, proceed to step S62, otherwise end the current detection task;
S62: difference the saved template point cloud P_t with the registered point cloud P_r to obtain the differential point cloud P_d containing the deformed points;
S63: extract the boundary of the registered point cloud P_r with the alpha-shape algorithm, setting the rolling-radius parameter of the alpha shape to r = 2·Δl, to detect the complete boundary point cloud P_b;
S64: calculate the centroid p_bc of the boundary point cloud P_b, and construct the two vectors v1, from p_bc to any point p_bk of the boundary point cloud, and v2, from p_bc to the point p_dk of the differential point cloud nearest to p_bk:

v1 = p_bk − p_bc,  v2 = p_dk − p_bc,

where each point is given by its x, y and z coordinates. Whether the points of the differential point cloud all lie inside the boundary point cloud is judged by comparing the magnitudes of v1 and v2:

||v2|| ≤ ||v1||: the point lies inside the boundary;  ||v2|| > ||v1||: the point lies outside the boundary.

The out-of-boundary points are filtered out to obtain the deformed point cloud.
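One reading of the S64 boundary test, sketched with numpy. The pairing direction used here, the nearest boundary point for each differential point, is an interpretation; the text admits either pairing direction, and the function name is illustrative.

```python
import numpy as np

def filter_by_boundary(diff_pts, boundary_pts):
    """Keep a differential point p only when |p - p_bc| <= |p_bk - p_bc|,
    where p_bc is the boundary centroid and p_bk the boundary point
    nearest to p, i.e. p lies inside the boundary ring."""
    p_bc = boundary_pts.mean(axis=0)
    kept = []
    for p in diff_pts:
        d2 = ((boundary_pts - p) ** 2).sum(axis=1)
        p_bk = boundary_pts[d2.argmin()]          # nearest boundary point
        if np.linalg.norm(p - p_bc) <= np.linalg.norm(p_bk - p_bc):
            kept.append(p)
    return np.array(kept).reshape(-1, 3)
```

With a roughly planar box face this radial comparison is a cheap inside/outside test that avoids a full point-in-polygon computation.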
Preferably, in the step S64, after the deformed point cloud is obtained its centroid is calculated, giving the three-dimensional position P_de_c of the deformation:

P_de_c = ( (1/n)·Σ_{i=1}^{n} x_i, (1/n)·Σ_{i=1}^{n} y_i, (1/n)·Σ_{i=1}^{n} z_i ),

where P_de denotes the deformed point cloud, n is the number of points in P_de, and x_i, y_i, z_i are the x, y and z coordinates of the points of P_de.

According to the mapping relation between the two-dimensional image and the three-dimensional point cloud, the two-dimensional image coordinates are then obtained from the three-dimensional position, completing the deformation localization of the box body. The projection is

u = f·x_c/(dx·z_c),  v = f·y_c/(dy·z_c),

where P(u, v) is the pixel of the image pixel coordinate system with abscissa u and ordinate v, f is the focal length of the camera (a known value), dx and dy are the physical width and height of a pixel, and P(x_c, y_c, z_c) is a three-dimensional point with coordinates x_c, y_c, z_c in the camera coordinate system.
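The centroid-then-project localization above can be sketched in one small function; as in the formula, the principal point is assumed at the image origin, and the name is illustrative.

```python
import numpy as np

def locate_deformation(deform_pts, f, dx, dy):
    """Centroid of the deformed point cloud (the 3D position P_de_c),
    then its pinhole projection to image pixels for 2D localization."""
    xc, yc, zc = deform_pts.mean(axis=0)   # (1/n) * sum over the points
    u = f * xc / (dx * zc)                 # u = f * x_c / (dx * z_c)
    v = f * yc / (dy * zc)                 # v = f * y_c / (dy * z_c)
    return (xc, yc, zc), (u, v)
```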
The beneficial effects of the invention are as follows:
1) The invention uses the lightweight PP-PicoDet model to locate the boxes at the train bottom, solving box positioning in an underframe environment prone to brightness changes and dirt occlusion. In addition, because the acquired data comprise both two-dimensional images and three-dimensional point clouds, the existing point-cloud data can be used for data augmentation of the two-dimensional target-localization algorithm, alleviating the few-sample problem during training;
2) The invention designs a method for locating a target in three-dimensional space from a two-dimensional image target: the two-dimensional bounding-box coordinates of the target are obtained by target localization in the image, the bounding box is mapped into three-dimensional space using the projection relation between the image pixel coordinate system and the camera coordinate system, the target's three-dimensional point cloud is screened out, and the cloud is then filtered a second time using depth information, effectively removing a large number of isolated points and speeding up processing. Deep-learning-based target localization effectively improves the robustness of the algorithm model and reduces inaccurate localization caused by shooting angle, ambient brightness and stains;
3) The invention designs a region-segmentation algorithm combining normal-based point-cloud bilateral filtering with MLESAC, solving point-cloud segmentation in a complex environment with good results. The normal-based bilateral filter combines the spatial coordinates and normals of the points for denoising. The MLESAC algorithm is an improvement on RANSAC (random sample consensus) that resolves RANSAC's instability and the threshold selection of its error function. MLESAC detects the plane in the point cloud while removing noise and outliers, giving the algorithm high robustness;
4) The invention provides a registration method for the target and template point clouds: SAC-IA coarse registration followed by nonlinear ICP fine registration transforms the target point cloud into the spatial reference frame of the template point cloud, after which the number of matching points and the RMSE of the two clouds are calculated to judge whether the box is deformed. A database of historical and standard data is established for each box, making the detection result more reliable;
5) The invention designs a point-cloud segmentation method combining point-cloud differencing with the alpha shape. The template point cloud and the registered point cloud are first differenced to obtain the differential point cloud; the alpha-shape planar point-cloud boundary-extraction algorithm then yields the boundary of the registered point cloud; finally the boundary is used to extract the deformed point cloud, and the three-dimensional spatial coordinates and image pixel coordinates of the deformation are obtained. Obtaining the position of the deformed point cloud solves the localization problem and completes the whole processing flow of box deformation detection.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a two-dimensional bounding box mapping to a three-dimensional space screening three-dimensional point cloud;
FIG. 3 is a schematic diagram of an extraction boundary point cloud algorithm;
FIG. 4 is a schematic diagram of screening deformed point clouds and point cloud positioning;
FIG. 5 is a graph showing the deformation measurement results for different types of boxes.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1-5, the present invention provides a technical solution: a train part deformation detection method based on three-dimensional point cloud, the flow is shown in figure 1, comprising the following steps:
step 1: acquisition and maintenance of 2D and 3D data of railway train box components
The intelligent inspection robot autonomously positions itself at a designated location under the subway train, and the three-dimensional industrial camera carried on the robot's mechanical arm collects data of the box components. Since the lower surface of the box, parallel to the ground, is the most likely to be deformed by external action and is also the easiest for the camera to photograph, during acquisition the camera lens plane is kept parallel to the lower surface of the box while the box image and three-dimensional point cloud are captured; the data are then transmitted to the algorithm server for real-time fault detection.
Step 2: positioning the position of a box in a two-dimensional image by adopting a PP-PicoDet target detection model
The number of points acquired by the three-dimensional industrial camera exceeds one million, and directly segmenting and registering such a cloud would consume great time and computing power. Therefore target detection is first performed in the two-dimensional image to locate the box and obtain the two-dimensional coordinates of its bounding box. Using the conversion between the image pixel coordinate system and the camera coordinate system, the bounding-box coordinates are mapped into three-dimensional point-cloud space and the box point cloud is obtained, as shown in fig. 2. Target detection uses the PP-PicoDet model based on a convolutional neural network: PP-PicoDet adopts an anchor-free strategy, uses Enhanced ShuffleNet as the backbone, and designs lightweight Neck and Head structures, strengthening feature extraction while keeping good detection accuracy and latency; the network also improves the label-assignment strategy and the loss function for more stable and efficient training. This guarantees the subsequent extraction of the target point-cloud block in three-dimensional space.
Step 3: A region-segmentation algorithm based on depth information and MLESAC (maximum likelihood estimation sample consensus) obtains the target point cloud of the box plane
Because the 3D camera captures the two-dimensional image and the three-dimensional point cloud at the same time and the same position, that is, the two share one camera coordinate system, a point in two-dimensional image space maps onto a line segment in three-dimensional point-cloud space (the number of points and the depth of the cloud are both finite). Knowing a point P(u, v) of the image pixel coordinate system, the corresponding point P(x_c, y_c, z_c) in the camera coordinate system can therefore be obtained. The conversion between the two coordinate systems is

x_c = z_c·u·dx/f,  y_c = z_c·v·dy/f,

where z_c is the depth of the three-dimensional point, dx and dy are the physical width and height of a pixel (parameters that can be obtained directly), and f, the focal length of the camera, is a known value. By mapping the coordinates of the bounding box, boundingbox = (u_min, v_min, u_max, v_max), into three-dimensional point-cloud space, the target point cloud limited in the x and y coordinates is obtained; and since the depth of the target point-cloud block to be detected lies within a set range, the exact target block is further screened out by depth.
First, the large amount of environmental point cloud outside the box is filtered out using the point-cloud depth information, and the data are then preprocessed with a normal-based point-cloud bilateral filter to further remove environmental points. This solves point-cloud segmentation in a complex environment with a good result; the normal-based bilateral filter combines the spatial coordinates and normals of the points for denoising.
The MLESAC algorithm is an improvement on the random sample consensus (RANdom SAmple Consensus, RANSAC) algorithm, resolving RANSAC's instability and the threshold selection of its error function. For a set of data containing normal values (generally called inliers) and abnormal values (outliers), MLESAC computes a globally optimal mathematical plane model by iterative calculation. The box plane and the environmental point cloud are separated by the obtained optimal model, remaining outliers such as environmental points are removed, and the inliers fitting the required plane model are output, yielding the target point cloud P_o of the box plane. MLESAC detects the plane in the point cloud while removing noise and outliers, giving the algorithm high robustness.
Step 4: The target point cloud P_o of the box is registered with the template point cloud P_t pre-stored in the database using the SAC-IA (sample consensus initial alignment) and nonlinear ICP (iterative closest point) algorithms
Whether the box body is deformed can be judged by calculating the similarity between the target point cloud and the template point cloud, so the first problem is to place the target point cloud and the template point cloud under the same reference frame. The point clouds obtained from the previous steps are still large, so a three-dimensional voxel grid of size Δl is constructed to downsample the target point cloud P_o, where the voxel size is 0.5 mm. For the accuracy of the point cloud registration and the integrity of the template point cloud itself, the resolution of the template point cloud must be higher than that of the target point cloud, so its downsampling voxel size is 0.2 mm.
Because the repeated positioning accuracy of the UR5e collaborative robot arm is ±0.03 mm, errors caused by repeated positioning of the arm can be neglected at this accuracy. The rotation transformation can therefore be ignored when converting coordinate systems, and only the translational error caused by the positioning error of the trolley is considered.
The template point cloud P_t and the target point cloud P_o are first coarsely registered with SAC-IA to obtain an initial transformation matrix between the two point clouds, and fine registration is then performed with the nonlinear ICP algorithm. When the two point clouds differ greatly, nonlinear ICP easily falls into a locally optimal solution and cannot obtain a good matching effect; the initial transformation matrix obtained by SAC-IA solves this problem, so that nonlinear ICP finally yields a high-precision transformation matrix placing the target point cloud and the template point cloud in the same reference frame.
The SAC-IA algorithm coarsely registers the template point cloud P_t and the target point cloud P_o to obtain an initial rotation matrix R_0 and translation vector T_0. The target point cloud is transformed by this initial rotation matrix and translation vector to obtain the coarsely registered point cloud P_ro. A threshold judgment is then applied to the points of P_t and P_ro: for two points p_t^i ∈ P_t and p_ro^j ∈ P_ro, the Euclidean spatial distance is

l_ij = √( (x_t^i − x_ro^j)² + (y_t^i − y_ro^j)² + (z_t^i − z_ro^j)² )

The distance l_ij is compared with a threshold, and the pairs below the threshold are screened out as corresponding points (p_t^k, p_ro^k), k = 1, …, N_p, of P_t and P_ro, where N_p is the number of corresponding matching point pairs.
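This threshold screening of corresponding points can be illustrated as follows. Brute-force nearest-neighbour search stands in for the accelerated search a real implementation would use, and the function name and arguments are hypothetical.

```python
def match_points(p_t, p_ro, threshold):
    """Screen corresponding points between the template cloud p_t and the
    coarse-registered target cloud p_ro: for each template point take the
    nearest target point and keep the pair if their Euclidean distance
    l_ij is below the threshold."""
    pairs = []
    for a in p_t:
        best, best_d = None, float("inf")
        for b in p_ro:
            d = sum((a[k] - b[k]) ** 2 for k in range(3)) ** 0.5
            if d < best_d:
                best, best_d = b, d
        if best_d < threshold:
            pairs.append((a, best))
    return pairs   # N_p = len(pairs)
```

The length of the returned list plays the role of the matching-pair count N_p used in the deformation judgment of Step 5.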
The nonlinear ICP algorithm builds the objective function:

f(R, T) = (1/N_p) Σ_{i=1}^{N_p} ‖ p_t^i − (R·p_ro^i + T) ‖²

By iteratively minimizing f(R, T), updating R and T and re-finding the corresponding points, the nonlinear ICP algorithm continuously optimizes R and T, obtaining the final rotation matrix and translation vector together with the number of matching point pairs N_p between the two point clouds. The point cloud obtained by spatially transforming the target point cloud is denoted the registration point cloud P_r.
Step 5: box deformation judgment
The previous step yields the transformation matrix of the point cloud registration and the number of matching point pairs N_p, and a first deformation judgment is made using this number. A suitable matching-point threshold T_match is set as T_match = N/λ, where N is the number of points in the target point cloud and λ is a proportionality constant set in the program. A count below T_match indicates that the deformation of the box body is severe; when the count is above the threshold, the registration obtained a large number of corresponding points and the registration process can be considered successful.
When the number of matching points is higher than the threshold T_match, the next judgment is performed: the root mean square error (RMSE) of the corresponding points of the template point cloud and the target point cloud is calculated, and the two evaluation parameters T_match and RMSE complete the logical judgment of box deformation. When the number of matching points N_p is less than the threshold T_match, the box body is deformed and the similarity of the two clouds is low. When N_p is greater than T_match, the RMSE parameter is used for further judgment by comparing it with the deformation threshold T_R recorded from historical and standard data: when RMSE is above T_R, the similarity of the two clouds is low and the box body is deformed; when RMSE is below T_R, the similarity of the two clouds is high and the box surface is normal.

RMSE = √( (1/N_p) Σ_{i=1}^{N_p} ‖ p_t^i − p_r^i ‖² )

wherein N_p is the number of corresponding points matching the target point cloud and the template point cloud.
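The two-stage judgment (first T_match, then RMSE) can be sketched as a small function. The thresholds, argument names, and return convention are illustrative assumptions, not values from the patent.

```python
def judge_deformation(pairs, n_target, lam, t_r):
    """Two-stage box deformation judgment: first compare the number of
    matched pairs N_p with T_match = N/lam; if there are enough matches,
    compare the RMSE of the pairs with the deformation threshold T_R.
    Returns True when the box is judged deformed."""
    t_match = n_target / lam
    n_p = len(pairs)
    if n_p < t_match:          # too few matches: severe deformation
        return True
    # RMSE = sqrt( (1/N_p) * sum ||p_t^i - p_r^i||^2 )
    sq = sum(sum((a[k] - b[k]) ** 2 for k in range(3)) for a, b in pairs)
    rmse = (sq / n_p) ** 0.5
    return rmse > t_r
```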
Step 6: and positioning the deformation position.
The previous steps judge whether the box body is deformed but do not locate the deformed position, so this step works on the template point cloud P_t and the registration point cloud P_r. First, a differential operation is performed on the two point clouds to obtain the regions in which they differ.
In order to enable the target point cloud to find corresponding points in the template point cloud during registration, the template point cloud must be a complete point cloud of the box plane. The template point cloud normally contains the registration point cloud, so the differential operation between the template point cloud and the registration point cloud P_r yields a defect point cloud and a redundant point cloud (the part of the template point cloud beyond the registration point cloud), together called the difference point cloud P_d. The point cloud differential operation proceeds as follows: for each point p_t^i in the template point cloud, the nearest point p_r^j of the registration point cloud is searched using a KdTree data structure; if the spatial distance between p_t^i and p_r^j is greater than the set threshold, then p_t^i has no neighbor in the registration point cloud, so p_t^i is a point in the redundant point cloud or the deformed point cloud.
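The differential operation just described can be sketched as follows. A KdTree would accelerate the nearest-neighbour search on real data; brute force is used here for clarity, and all names are hypothetical.

```python
def difference_cloud(template, registered, threshold):
    """Differential operation: a template point with no registered point
    within `threshold` belongs to the difference cloud P_d (deformed or
    redundant points)."""
    diff = []
    for p in template:
        d_min = min(sum((p[k] - q[k]) ** 2 for k in range(3)) ** 0.5
                    for q in registered)
        if d_min > threshold:
            diff.append(p)
    return diff
```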
The obtained difference point cloud comprises two parts, the redundant point cloud and the deformed point cloud, so the redundant points must be filtered out. The boundary of the registration point cloud P_r is extracted with the alpha-shape algorithm, and the redundant points are judged and filtered through the spatial relationship. Since the point cloud concerned is planar, boundary extraction uses the planar-point-cloud variant of the alpha-shape boundary extraction method.
The rolling-ball radius is set to r, where r = 2·Δl. A point p_0 is randomly selected in the point cloud; with p_0 as center, a circle c of radius 2r is drawn, and the remaining points inside c are denoted p_2r_cir. A point p_1 is then chosen inside the circle, and the two circles c_1, c_2 of radius r passing through p_0 and p_1 are obtained; the points inside c_1, c_2 other than p_0 and p_1 are denoted p_1r_cir. The distances from these points to the two circle centers are then calculated, giving the distance vectors d_1 and d_2 to c_1 and c_2 respectively. If the following condition holds, p_1 is determined to be a boundary point, and the next round of the loop starts with p_0 := p_1, restarting the algorithm steps above:

min(d_1) < r or min(d_2) < r

Otherwise, a new p_1 is selected within p_1r_cir and the above test is repeated.
The boundary point cloud of the registration point cloud P_r is thus obtained by the alpha-shape algorithm. As shown in fig. 3, each time the three-dimensional point cloud of the box is captured, the 3D camera faces the box plane, so the depth (z-axis coordinate) of the captured box point cloud is approximately the same; that is, the boundary point cloud P_b and the difference point cloud P_d have substantially the same depth, as shown in fig. 4. The x and y coordinates of the boundary point cloud are therefore used to screen the difference point cloud once the complete boundary point cloud P_b has been detected.
The centroid p_bc of the boundary point cloud P_b is computed. The boundary points p_bk are traversed in turn; for each, the nearest point p_dk of the difference point cloud P_d is searched, and the vectors v_bk = p_bk − p_bc and v_dk = p_dk − p_bc are calculated, with magnitudes

|v_bk| = √( (x_bk − x_bc)² + (y_bk − y_bc)² ), |v_dk| = √( (x_dk − x_bc)² + (y_dk − y_bc)² )

wherein x and y denote the coordinates of the corresponding points. Whether the points in the difference point cloud all lie within the boundary point cloud is judged by comparing the magnitudes of the vectors, with the judgment relation:

|v_dk| ≤ |v_bk| : p_dk lies inside the boundary (deformed point); otherwise p_dk is a redundant point.
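The centroid-distance screening of the difference point cloud can be illustrated as below. As in the text, the clouds are treated as planar, so only the x and y coordinates are compared; all names are hypothetical.

```python
def screen_by_boundary(diff_cloud, boundary):
    """Keep only difference points lying inside the plane boundary:
    compare each point's x-y distance to the boundary centroid with the
    distance of its nearest boundary point."""
    n = len(boundary)
    cx = sum(p[0] for p in boundary) / n      # centroid p_bc (x, y)
    cy = sum(p[1] for p in boundary) / n
    deformed = []
    for p in diff_cloud:
        # nearest boundary point in the x-y plane
        b = min(boundary,
                key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
        d_p = ((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5   # |v_dk|
        d_b = ((b[0] - cx) ** 2 + (b[1] - cy) ** 2) ** 0.5   # |v_bk|
        if d_p <= d_b:            # inside the boundary: deformed point
            deformed.append(p)
    return deformed
```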
By filtering the difference point cloud P_d, the deformed point cloud P_de is obtained. The centroid of the deformed point cloud is calculated to represent its three-dimensional position, namely:

P_de_c = ( (1/n) Σ_{i=1}^{n} x_i, (1/n) Σ_{i=1}^{n} y_i, (1/n) Σ_{i=1}^{n} z_i )

wherein P_de denotes the deformed point cloud, n denotes the number of points in the deformed point cloud P_de, and x_i, y_i, z_i denote respectively the x, y, z coordinates of the points of P_de.
By means of the conversion relation between the image pixel coordinate system and the camera coordinate system, the two-dimensional image coordinate position can be obtained from the three-dimensional spatial position. The conversion relation is:

u = f·x_c / (dx·z_c), v = f·y_c / (dy·z_c)

where p(u, v) denotes the pixel with abscissa u and ordinate v in the image pixel coordinate system, f is the focal length of the camera (a known value), dx and dy denote the actual width and height of each pixel, and P(x_c, y_c, z_c) denotes the three-dimensional point with coordinates x_c, y_c, z_c in the camera coordinate system.
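The camera-to-pixel conversion follows directly from the relation above; the function name is illustrative, and the principal-point offset is omitted as in the simplified relation used in this document.

```python
def project_to_pixel(point, f, dx, dy):
    """Map a camera-frame 3-D point (x_c, y_c, z_c) to image pixel
    coordinates: u = f*x_c/(dx*z_c), v = f*y_c/(dy*z_c)."""
    x_c, y_c, z_c = point
    u = f * x_c / (dx * z_c)
    v = f * y_c / (dy * z_c)
    return u, v
```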
So far, the deformation area of the box body is positioned.
The detection of the box body state is completed through the above steps; if the box body is deformed, an alarm and the fault position point are issued for maintenance personnel.
The invention provides a novel and complete detection flow for determining whether the bottom box of a subway train is deformed. The train inspection robot automatically navigates to a detection point, and two-dimensional images and three-dimensional point cloud data of the part to be detected are acquired through the height and angle adjusted by the 3D industrial camera carried on the mechanical arm, greatly improving the degree of freedom and richness of data acquisition and providing strong data support for subsequent algorithm implementation, testing and adjustment.
The invention introduces a deep convolution model, improving the generalization and robustness of the algorithm. The PP-PicoDet detection model can detect various vehicle bottom boxes, including the auxiliary brake box, AB box, PA box, PH box and the like, and has good generalization. It can accurately detect the target even when the target is affected by stain contamination, brightness changes and the like, a significant improvement in robustness over the traditional template matching algorithm. In addition, the PP-PicoDet model adopts an anchor-free strategy and improves the Head and Neck parts, greatly reducing training and inference time. In practical tests, a 1944 × 1200 pixel image is detected on a 2080 Ti 8 GB hardware platform in only about 0.02 s, achieving real-time detection, and the accuracy of target detection reaches 97%.
The MLESAC segmentation algorithm designed by the invention detects the box body plane to obtain a consistent point cloud within the point cloud block. MLESAC detects the globally optimal plane model parameters, eliminates noise points and outliers, suppresses the influence of the camera acquisition angle, and improves the robustness of plane segmentation.
In point cloud registration, the invention places the target point cloud and the template point cloud in the same reference frame by means of the SAC-IA and nonlinear ICP algorithms, downsamples the target point cloud, and obtains matching points between the target and template point clouds for similarity calculation. Compared with conventional object surface deformation detection algorithms such as the grid method and the digital speckle correlation method, this omits the steps of prefabricating grids on the object surface or generating speckles, and is fast and generalizable.
In the three-dimensional point cloud deformation detection process, a large amount of actual data acquired from metro train bottom boxes was tested. In the actual tests, point clouds of actually deformed boxes were detected and all samples were judged correctly. To verify the accuracy of the algorithm, the error must be calculated; since deformation point cloud data of a deformed box cannot be obtained directly, the real point cloud coordinates of the deformation area are obtained point by point by calculating the deformation amount from the point cloud, the deformation in the depth direction (z coordinate direction) being obtained with the point cloud plane model. The experimental data were then tested with the proposed deformation detection model. Fig. 5 compares the real average deformation and the model-measured average deformation at the deformation points of different deformed boxes; according to the experimental results, the calculation error of the proposed deformation detection model is within 3.5 mm, which meets the actual train box deformation detection standard.
Rapid and accurate deformation detection of the train bottom box can find box faults in time and facilitate subsequent maintenance, thereby ensuring the safe operation of the subway train and promoting the intelligent development of train fault diagnosis technology.
Although the present invention has been described with reference to the foregoing embodiments, it will be apparent to those skilled in the art that the described embodiments may be modified or elements thereof replaced by equivalents; any modifications, equivalents, improvements and changes made without departing from the spirit and principles of the present invention fall within its scope.

Claims (3)

1. The train part deformation detection method based on the three-dimensional point cloud is characterized by comprising the following steps of:
s1, data acquisition: the train inspection trolley is automatically positioned to a designated position, and a two-dimensional image and a three-dimensional point cloud of a box body are acquired through a three-dimensional industrial camera on the mechanical arm;
s2, a PP-PicoDet detection model is adopted, in combination with the mapping relation between the two-dimensional image and the three-dimensional point cloud, to locate the box point cloud in the three-dimensional point cloud space;
s3, dividing the box point cloud based on depth information and the MLESAC region segmentation algorithm to obtain the target point cloud P_o of the box plane; the method specifically comprises the following steps:
s31: filtering out a large amount of environmental point clouds except the box body by utilizing the point cloud depth information;
s32: preprocessing point cloud data by adopting a point cloud bilateral filtering method based on normal, and further removing environmental point clouds;
s33: detecting a global optimal plane model in the point cloud set by using an MLESAC algorithm, and further removing external points of the environmental point cloud and outputting internal points conforming to the required plane model;
s4, registering the target point cloud P_o of the box body obtained above with the template point cloud P_t pre-stored in the database based on the SAC-IA and nonlinear ICP algorithms, specifically comprising: performing coarse registration of the template point cloud P_t and the target point cloud P_o using SAC-IA to obtain a primary transformation matrix between the two point clouds, and performing fine registration using the nonlinear ICP algorithm;
s5, judging the deformation of the box body; the method specifically comprises the following steps:
s51: obtaining a target point cloud P o And template point cloud P t Matching point logarithm N of (2) p And according toTarget point P o Matching point logarithmic threshold T for quantity setting of clouds match ,T match N/λ, N is the target point cloud P o The number of midpoints;
s52: calculating root mean square RMSE between the box body template point cloud and the target point cloud matching point, and passing through T match And the two evaluation parameters of the RMSE finish logic judgment of the deformation of the box body; when matching the number N of points p Less than threshold T match The box body is deformed, and the similarity degree of the box body is low; when matching the number N of points p Greater than threshold T match When the method is used, the RMSE parameters are used for further judgment, and the deformation threshold T of the RMSE and the history data and standard data records is used for R Comparing and judging the deformation state of the box body; RMSE is above threshold T R When the two-point cloud similarity is low, the box body is deformed and is lower than a threshold value T R The two-point cloud is high in similarity degree, and the surface of the box body is normal;
Figure FDA0004210026220000021
wherein ,Np The matching point logarithm of the target point cloud and the template point cloud;
s6, deformation positioning of the box body; the method specifically comprises the following steps:
s61: determining whether positioning is performed according to the deformation judgment in the previous step, if the deformation occurs, performing step S62, otherwise ending the current detection task;
s62: for saved template point cloud P o And registering the point cloud P r Performing differential processing to obtain differential point cloud P containing deformed point cloud d
S63: extracting the registration point cloud P by adopting alpha shape algorithm r Setting an alpha shape algorithm parameter rolling radius r=2·Δl, and detecting a complete boundary point cloud P b
S64: computing boundary point cloud P b Centroid p of (2) bc Construction of p bc To any point p of the boundary point cloud bk and pbc Distance p bk Nearest differential point cloud p dk Is a vector of (2)Measuring amount
Figure FDA0004210026220000022
and />
Figure FDA0004210026220000023
Figure FDA0004210026220000024
wherein ,
Figure FDA0004210026220000025
respectively representing x, y and z coordinates of the corresponding points;
judging whether points in the differential point cloud are all in the boundary point cloud or not, and according to vectors
Figure FDA0004210026220000026
and />
Figure FDA0004210026220000027
The size of (2) is the condition, the judgment relation is as follows:
Figure FDA0004210026220000031
filtering out the out-of-limit points to obtain a deformed point cloud;
after the deformed point cloud is obtained in step S64, calculating its centroid to obtain the three-dimensional position P_de_c of the deformed point cloud, with the calculation formula:

P_de_c = ( (1/n) Σ_{i=1}^{n} x_i, (1/n) Σ_{i=1}^{n} y_i, (1/n) Σ_{i=1}^{n} z_i )

wherein P_de denotes the deformed point cloud, n denotes the number of points in the deformed point cloud P_de, and x_i, y_i, z_i denote respectively the x, y, z coordinates of the points of P_de;
according to the mapping relation between the two-dimensional image and the three-dimensional point cloud, obtaining the two-dimensional image coordinate position from the three-dimensional spatial position and completing the deformation positioning of the box body, with the specific formula:

u = f·x_c / (dx·z_c), v = f·y_c / (dy·z_c)

where p(u, v) denotes the pixel with abscissa u and ordinate v in the image pixel coordinate system, f is the focal length of the camera (a known value), dx and dy denote the actual width and height of each pixel, and P(x_c, y_c, z_c) denotes the three-dimensional point with coordinates x_c, y_c, z_c in the camera coordinate system.
2. The three-dimensional point cloud-based train component deformation detection method according to claim 1, characterized by: the step S2 specifically includes the following steps:
s21: loading a trained PP-PicoDet target detection model, and carrying out target detection on the box body in a two-dimensional image to obtain two-dimensional coordinates of a box body boundary frame;
s22: mapping the detected two-dimensional coordinates into a three-dimensional point cloud to obtain a box target point cloud P o
3. The three-dimensional point cloud-based train component deformation detection method according to claim 1, characterized by: the step S4 specifically includes the following steps:
s41: setting the dimension delta l of the three-dimensional voxel grid to finish the template point cloud P t Downsampling, reducing the calculated amount of the detection process;
s42: coarse registration of the target point cloud and the template point cloud is carried out by utilizing an SAC-IA algorithm, and an initial rotation matrix and a translation vector are obtained;
s43: inputting an initial rotation matrix and a translation vector, and adopting a nonLinear ICP algorithm to carry out fine registration on two-point clouds to obtain a final rotation matrix and a translation vector and two pointsMatching point logarithm N between clouds p Wherein the point cloud obtained by spatially transforming the target point cloud is denoted as a registration point cloud P r
CN202210930378.6A 2022-08-03 2022-08-03 Train part deformation detection method based on three-dimensional point cloud Active CN115482195B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210930378.6A CN115482195B (en) 2022-08-03 2022-08-03 Train part deformation detection method based on three-dimensional point cloud


Publications (2)

Publication Number Publication Date
CN115482195A CN115482195A (en) 2022-12-16
CN115482195B true CN115482195B (en) 2023-06-20

Family

ID=84423073


Country Status (1)

Country Link
CN (1) CN115482195B (en)







Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant