CN116518864A - Engineering structure full-field deformation detection method based on three-dimensional point cloud comparison analysis - Google Patents
Engineering structure full-field deformation detection method based on three-dimensional point cloud comparison analysis
- Publication number
- CN116518864A (application number CN202310367251.2A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- point
- deformation
- registration
- clouds
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/16—Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge
- G01B11/167—Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge by projecting a pattern on the object
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/002—Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/16—Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/02—Affine transformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
- G06V10/763—Non-hierarchical techniques, e.g. based on statistics of modelling distributions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T90/00—Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Abstract
The invention relates to an engineering structure full-field deformation detection method based on three-dimensional point cloud comparison analysis, comprising the following steps: selecting image control points arranged on the surface of the structure as feature points, acquiring an image sequence of the structure, and completing reconstruction of a three-dimensional point cloud model to obtain an initial point cloud model; creating a spatial index for the initial point cloud model, performing noise filtering and downsampling to obtain a point cloud model that is convenient to compute, and performing clustering segmentation to obtain a local point cloud model of the structure; coarsely calculating the initial affine transformation matrix of two groups of local point clouds based on the RANSAC algorithm, then determining a fine affine transformation matrix on the non-downsampled point clouds based on the ICP algorithm to obtain a local point cloud registration model; and, based on the local point cloud registration model, determining structural deformation values with a coordinate-system-based deformation detection method, a point-cloud-registration-based deformation detection method, or a deformation detection method fusing the two. Compared with the prior art, the invention achieves fast and accurate detection.
Description
Technical Field
The invention relates to the technical field of building deformation extraction, in particular to an engineering structure full-field deformation detection method based on three-dimensional point cloud comparison analysis.
Background
Under load, continuous environmental action, sudden disasters and the like during construction or service, engineering structures often undergo key state changes such as deformation, displacement, inclination and torsion, which in turn affect the overall durability and safety of the structure. For quantitative detection of engineering structure deformation, traditional methods mainly deploy sensors or high-precision surveying instruments (such as levels and total stations) to observe key points of the structure repeatedly and thereby obtain the key states.
The detection means commonly used at present have the following problems:
(1) Under complex engineering field conditions, and especially in post-disaster emergency evaluation, measurement efficiency and data quality cannot be guaranteed; the workload and labor intensity for technical staff are high, and the work carries a certain degree of danger.
(2) Engineering structures tend to be very large. Traditional measurement means relying on surveying instruments can only observe discrete points of the key deformation state, leaving observation blind areas; full-field deformation information of the building cannot be obtained, and the detection and monitoring of special key nodes carry certain potential safety hazards.
From the viewpoints of detection personnel safety, economy of the detection means, and the globality of the detection results, current single-mode engineering structure deformation detection and evaluation technology cannot meet the requirements of speed, accuracy, efficiency, low cost, and adaptability to field conditions.
Disclosure of Invention
The invention aims to provide an engineering structure full-field deformation detection method based on three-dimensional point cloud comparison analysis, which integrates close-range photogrammetry with three-dimensional point cloud processing algorithms: unmanned aerial vehicle close-range photogrammetry serves as the non-contact detection medium to realize full-field detection, and point cloud processing algorithms serve as the intelligent state sensing means. Taking fast detection and accurate evaluation of the key state of the engineering structure as the goal, and deformation of the engineering structure as the key state, the method realizes rapid evaluation and quantification of the key state of the engineering structure.
The aim of the invention can be achieved by the following technical scheme:
an engineering structure full-field deformation detection method based on three-dimensional point cloud comparison analysis comprises the following steps:
s1, generating point cloud data: selecting artificial image control points and/or natural image control points arranged on the surface of the structure as characteristic points, carrying out image sequence acquisition on the structure, and completing three-dimensional point cloud model reconstruction based on multi-view optical images to obtain an initial point cloud model;
s2, preprocessing a point cloud model: creating a spatial index for the initial point cloud model, performing noise filtering and downsampling to obtain a point cloud model convenient to calculate, and performing clustering segmentation on the point cloud model to obtain a local point cloud model of the structure;
s3, multi-point cloud data registration: coarsely calculating the affine transformation matrix of two groups of local point clouds based on the RANSAC algorithm to obtain an initial affine transformation matrix, determining a fine affine transformation matrix on the non-downsampled point clouds based on the ICP algorithm, and finely registering the point clouds to obtain a local point cloud registration model of the two groups of point clouds in the same spatial dimension;
s4, detecting point cloud deformation: based on the local point cloud registration model, a deformation detection method based on a coordinate system, a deformation detection method based on point cloud registration or a deformation detection method based on fusion of the coordinate system and the point cloud registration is adopted for three different scenes to determine structural deformation values.
The step S1 comprises the following steps:
s101, collecting an image sequence:
based on close-range photogrammetry, artificial and/or natural image control points are laid out, the flight route of the unmanned aerial vehicle is planned, and a high-resolution image sequence of the target is obtained;
s102, generating point cloud data:
Based on the high-resolution image sequence of the target, three-dimensional point cloud reconstruction is performed with the structure-from-motion (SfM) method to obtain the initial three-dimensional point cloud model.
The step S2 comprises the following steps:
s201, creating a spatial index for an initial point cloud model;
s202, noise filtering:
denoising the obtained structured point cloud model based on a neighborhood algorithm to obtain a denoised point cloud model;
s203, point cloud downsampling:
performing point cloud downsampling on the denoised point cloud model based on a voxel method, and preserving geometric structural characteristics of the point cloud while reducing the number of three-dimensional points;
s204, clustering and segmentation of point clouds:
s2041, aiming at a downsampled point cloud model, performing primary clustering segmentation of the point cloud by adopting a RANSAC algorithm, and obtaining a roughly extracted local analysis point cloud by fitting a digital model;
s2042, after obtaining the rough extraction local analysis point cloud data, carrying out fine extraction of the local analysis point cloud, and specifically comprising the following steps: determining a height threshold value of the structural wall surface, completing point cloud segmentation according to the projection density of the point cloud along a specific direction, removing discrete noise points based on a Kmeans clustering algorithm, and completing fine extraction of the structural local analysis point cloud to obtain a structural local point cloud model.
The step S3 comprises the following steps:
s301, determining an affine transformation matrix:
under the condition that the relative pose of the point clouds is completely unknown, carrying out global search matching based on the RANSAC algorithm, and roughly calculating initial affine transformation matrixes of two groups of point clouds;
after an initial affine transformation matrix is obtained, rigid transformation between two point clouds is estimated by minimizing a distance difference in an iterative mode based on an ICP algorithm, and fine registration is carried out on the point clouds which are not downsampled to obtain a fine affine transformation matrix;
s302, registering point cloud data:
and carrying out iterative computation on the point cloud data based on the fine affine transformation matrix to obtain fused point cloud data under the same space dimension, namely a local point cloud registration model with a space coordinate system.
The RANSAC algorithm in S301 specifically includes the following steps:
estimating the normal vector of each point in the two groups of point clouds based on the finely extracted local point clouds, and calculating the FPFH (Fast Point Feature Histogram) feature of each point from the normal vectors to obtain geometric feature data of the point clouds;
based on the geometric feature data of the point clouds, performing global registration with the RANSAC algorithm: randomly selecting n points from the source point cloud P, querying their nearest neighbors in the 33-dimensional FPFH feature space to detect the corresponding points in the target point cloud Q, iterating repeatedly over the registration point pairs, selecting the optimal result according to the minimum-error principle, and calculating the initial affine transformation matrix.
The initial affine transformation matrix comprises a rotation matrix R and a translation matrix t:
P = {p_1, p_2, …, p_n}, Q = {q_1, q_2, …, q_n}
Q = RP + t
wherein P is a source point cloud, Q is a target point cloud, and n is the number of selected random points.
The ICP algorithm in S301 specifically includes the following steps:
transforming the source point cloud into a coordinate system of the target point cloud based on the initial affine transformation matrix to finish initialization;
calculating the difference between the source point cloud and the target point cloud and taking it as the evaluation result;
based on the ICP algorithm, a distance threshold is set; when the distance between a point in one cloud and its nearest point in the other cloud is smaller than the threshold, the pair is treated as corresponding points, giving n pairs of new registration points in one-to-one correspondence between P and Q;
the affine transformation matrix is updated based on the new registration point pairs, and the above steps are repeated until the evaluation result meets the preset threshold, yielding the fine affine transformation matrix for fine registration.
The step S4 comprises the following steps:
s401, calculating different point cloud overlapping areas:
calculating the bounding boxes of the different point clouds based on the local point cloud registration model and intersecting them to obtain the point cloud overlapping region;
s402, detecting point cloud deformation:
after the extraction of the overlapping areas of the two groups of point clouds is completed, respectively adopting a deformation detection method based on a coordinate system, a deformation detection method based on point cloud registration or a deformation detection method based on the fusion of the coordinate system and the point cloud registration for three different scenes to quantify the deformation value of the overlapping areas of the structural point clouds.
The step S401 specifically includes the following steps:
after the two groups of point clouds are transformed into a common coordinate system by point cloud registration with the fine affine transformation matrix, the maximum and minimum coordinates of each cloud are calculated to obtain its extent and create a bounding box;
based on the bounding boxes of the two groups of point clouds, the union of the bounding boxes is taken in the deformation direction of the point clouds so as to ensure that the point clouds in the overlapping area of the two groups of point clouds are completely reserved, and the intersection is taken in the other two directions so as to obtain the overlapping area of the point clouds.
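The bounding-box construction above (union of the two boxes along the deformation direction, intersection along the other two axes) might be sketched as follows; this is a minimal NumPy illustration, not the patent's implementation, and `deform_axis` is an assumed parameter:

```python
import numpy as np

def overlap_region(pc_a, pc_b, deform_axis=0):
    """Crop two registered clouds to their common region: intersect the
    bounding boxes in two axes, but take the union along the assumed
    deformation axis so displaced points are not cut off."""
    lo = np.maximum(pc_a.min(axis=0), pc_b.min(axis=0))
    hi = np.minimum(pc_a.max(axis=0), pc_b.max(axis=0))
    # union along the deformation direction
    lo[deform_axis] = min(pc_a.min(axis=0)[deform_axis], pc_b.min(axis=0)[deform_axis])
    hi[deform_axis] = max(pc_a.max(axis=0)[deform_axis], pc_b.max(axis=0)[deform_axis])
    in_a = np.all((pc_a >= lo) & (pc_a <= hi), axis=1)
    in_b = np.all((pc_b >= lo) & (pc_b <= hi), axis=1)
    return pc_a[in_a], pc_b[in_b]
```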
The coordinate-system-based deformation detection method specifically comprises: after verifying the boundary range of the overlapping area of the two groups of point clouds, gridding the point cloud data with a grid-slicing method based on the three-dimensional coordinates of the two clouds, and calculating the one-way C2C (cloud-to-cloud) distance of the structural wall surface to obtain the structural deformation value;
the deformation detection method based on the point cloud registration specifically comprises the following steps: calculating coordinate differences among the point clouds according to the fine affine transformation matrix to obtain a structural deformation value;
the deformation detection method based on fusion of the coordinate system and point cloud registration specifically comprises: obtaining the local one-way deformation of the structure with the coordinate-system-based method, obtaining the rigid-body displacement value between the two groups of point clouds with the registration-based method, and superposing the local one-way deformation and the rigid-body displacement value to obtain a more accurate structural deformation detection result.
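A simplified stand-in for the one-way C2C distance computation (matching points in the plane orthogonal to an assumed deformation axis rather than by the patent's grid-slicing, then taking the signed difference along that axis):

```python
import numpy as np
from scipy.spatial import cKDTree

def c2c_deformation(reference, compared, axis=0):
    """Signed per-point displacement of `compared` relative to `reference`
    along one axis; points are matched by their other two coordinates."""
    tree = cKDTree(np.delete(reference, axis, 1))   # index on the two in-plane axes
    _, idx = tree.query(np.delete(compared, axis, 1))
    return compared[:, axis] - reference[idx, axis]
```

For the fused method, a rigid-body displacement value obtained from the registration matrix would then be added to these local values.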
Compared with the prior art, the invention has the following beneficial effects:
(1) High accuracy: the accuracy of the invention was verified on deformation extraction of a foundation pit support wall; the extracted deformation values differ little from the true values. Comparative analysis against the reference deformation results of the project's embedded inclinometer monitoring points shows that the three deformation detection methods can reach millimeter-level precision at best, with local detection precision reaching the sub-millimeter level, realizing automatic detection of global deformation of the engineering structure surface and meeting engineering practice requirements to a certain extent.
(2) Wide applicability: the invention adopts photogrammetry, providing a low-cost and accurate means for key-state evaluation of engineering structures; image control points constrain the model coordinate system and guarantee an accurate scale, so a high-precision three-dimensional point cloud model can be obtained even in scenes with weak satellite positioning signals.
(3) Complete data: the method realizes quantitative detection of continuous full-field deformation of the engineering structure based on a three-dimensional model; compared with two-dimensional images, the multi-dimensional information of the three-dimensional point cloud displays the actual scene more truly and intuitively, providing complete data support for the key performance evaluation results.
(4) Fast calculation: compared with traditional deformation identification methods, the method is faster and more accurate, adapts to field conditions, accurately acquires the global deformation of the structure, and realizes rapid evaluation and quantification of the key state of the engineering structure.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention;
FIG. 2 is a flow chart of point cloud data generation;
FIG. 3 is a schematic diagram of a point cloud deformation detection flow;
FIG. 4 is a graph of wall surface deformation detection of a foundation pit in one embodiment, wherein (a) is a coordinate system based wall surface deformation thermodynamic diagram, (b) is a registration based wall surface deformation thermodynamic diagram, and (c) is a coordinate system and registration fusion based wall surface deformation thermodynamic diagram;
FIG. 5 is a graph showing the comparison of inclinometer data and deformation of each deformation detection method according to one embodiment.
Detailed Description
The invention will now be described in detail with reference to the drawings and specific examples. The present embodiment is implemented on the premise of the technical scheme of the present invention, and a detailed implementation manner and a specific operation process are given, but the protection scope of the present invention is not limited to the following examples.
The embodiment provides a method for detecting full-field deformation of an engineering structure based on three-dimensional point cloud comparison analysis, as shown in FIG. 1, comprising the following steps:
s1, generating point cloud data
As shown in fig. 2, the method comprises the following steps:
s101, collecting an image sequence:
Before image data are collected, a field survey of the mapping target is conducted; points that occur naturally in the scene and are unlikely to produce spatial mapping errors are selected as natural image control points, artificially deployed targets serve as artificial image control points, and the three-dimensional coordinates of the image control points are collected;
based on close-range photogrammetry, the image control points are arranged, the flight route of the unmanned aerial vehicle is planned, and a high-resolution image sequence of the target is obtained.
S102, generating point cloud data:
Based on the high-resolution image sequence of the target, three-dimensional point cloud reconstruction is performed with the structure-from-motion (SfM) method to obtain the initial three-dimensional point cloud model.
Three-dimensional point cloud data are disordered, sparse, and unstructured; although they contain information such as the three-dimensional spatial attributes of objects, the required information can only be extracted through a series of processing algorithms, such as point cloud preprocessing, point cloud segmentation, and point cloud registration.
S2-S4 are structural deformation detection steps, and the flow is shown in FIG. 3.
S2, preprocessing a point cloud model: creating a spatial index for the initial point cloud model, performing noise filtering and downsampling processing to obtain a point cloud model convenient to calculate, and performing clustering segmentation on the point cloud model to obtain a local point cloud model of the structure.
The point cloud model obtained based on three-dimensional reconstruction is difficult to directly extract key state quantization indexes of an engineering structure, and the aim of point cloud preprocessing is to generate high-quality point cloud for subsequent processing. Point cloud model preprocessing typically includes spatial index creation, noise filtering, downsampling, cluster segmentation, and the like. Specifically, the method comprises the following steps:
s201, creating a spatial index for the initial point cloud model
Large-granularity space division and index creation are performed based on the octree algorithm: the space enclosed by the bounding box of the point cloud model is divided into eight cubes (eight leaf nodes), cubes containing no points are deleted, and the eightfold splitting step is repeated on cubes containing points until the cube edge length is smaller than the given leaf-node granularity;
then a finer index is built with a KD-tree: the variance of the data in each dimension is computed, and the dimension with the largest variance is taken as the initial splitting axis; starting from a point in the space, the whole space is divided into two parts by a hyperplane perpendicular to the splitting axis, and the division is repeated in the two subspaces until all points are processed, yielding a more ordered, structured three-dimensional point cloud.
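As a rough illustration of the KD-tree half of this indexing step, the following sketch builds an index over synthetic points with SciPy's `cKDTree` (an assumed stand-in; the patent does not name an implementation) and runs a nearest-neighbour query:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((1000, 3))      # synthetic stand-in for a reconstructed cloud

tree = cKDTree(points)              # KD-tree spatial index over the 3-D points

# nearest-neighbour lookup for an arbitrary probe point
dist, idx = tree.query([0.5, 0.5, 0.5], k=1)
nearest = points[idx]
```

Fast queries against such an index are what make the later neighborhood filtering and ICP correspondence searches tractable on large clouds.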
S202, noise filtering:
denoising the obtained structured point cloud model based on a neighborhood algorithm to obtain a denoised point cloud model.
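The text does not name the exact neighborhood algorithm; one common choice consistent with the description is statistical outlier removal on mean k-nearest-neighbour distances, sketched here (`k` and `std_ratio` are illustrative assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def denoise_statistical(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is an
    outlier relative to the whole cloud (assumed neighborhood-based filter)."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # column 0 is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d <= mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]
```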
S203, point cloud downsampling:
and carrying out point cloud downsampling on the denoised point cloud model based on a voxel method, and preserving geometric structural characteristics of the point cloud while reducing the number of three-dimensional points.
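A minimal NumPy sketch of voxel-grid downsampling as described, replacing all points in an occupied voxel by their centroid (`voxel_size` is an assumed parameter):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """One centroid per occupied voxel: reduces the point count while
    keeping the coarse geometric structure of the cloud."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    keys -= keys.min(axis=0)                      # shift voxel keys to non-negative
    span = keys.max(axis=0) + 1
    flat = (keys[:, 0] * span[1] + keys[:, 1]) * span[2] + keys[:, 2]
    _, inverse, counts = np.unique(flat, return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.size, points.shape[1]))
    np.add.at(centroids, inverse, points)         # sum points per voxel
    return centroids / counts[:, None]            # average -> centroid
```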
S204, clustering and segmentation of point clouds:
s2041, aiming at a downsampled point cloud model, performing primary clustering segmentation of the point cloud by adopting a RANSAC algorithm, and obtaining a roughly extracted local analysis point cloud by fitting a digital model;
s2042, after obtaining the rough extraction local analysis point cloud data, carrying out fine extraction of the local analysis point cloud, and specifically comprising the following steps: determining a height threshold value of the structural wall surface, completing point cloud segmentation according to the projection density of the point cloud along a specific direction, removing discrete noise points based on a Kmeans clustering algorithm, and completing fine extraction of the structural local analysis point cloud to obtain a structural local point cloud model.
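An illustrative stand-in for the K-means noise-removal step: a minimal deterministic k-means that keeps the largest cluster and discards smaller ones as discrete noise. The farthest-point initialisation and k=2 default are assumptions for the sketch, not settings stated in the text:

```python
import numpy as np

def kmeans_keep_main_cluster(points, k=2, iters=20):
    """Minimal k-means; the largest cluster is kept, smaller clusters are
    treated as discrete noise and removed."""
    centers = [points[0]]
    for _ in range(k - 1):                        # farthest-point seeding
        d = np.min([((points - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(points[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    main = np.bincount(labels, minlength=k).argmax()
    return points[labels == main]
```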
S3, multipoint cloud data registration
In computer vision, pattern recognition, and robotics, point cloud registration, also known as scan matching, is the process of finding a spatial transformation (e.g., scaling, rotation, and translation) that aligns two point clouds. The purpose of finding such a transformation includes merging multiple datasets into a globally consistent model (or coordinate system) and mapping new measurements onto a known dataset to identify features or estimate their pose.
Point cloud registration is also an important means of multi-source point cloud data fusion; it can be used to detect small deformation of the same object surface and plays an important role in point cloud processing. The point cloud registration algorithms used here are the RANSAC algorithm and the ICP algorithm.
Specifically, the method comprises the following steps:
s301, determining an affine transformation matrix:
s3011, under the condition that the relative pose of the point clouds is completely unknown, performing global search matching based on a RANSAC algorithm, and roughly calculating initial affine transformation matrixes of the two groups of point clouds.
The RANSAC algorithm specifically includes the following steps:
estimating the normal vector of each point in the two groups of point clouds based on the finely extracted local point clouds, calculating the FPFH (Fast Point Feature Histogram) feature of each point based on the normal vector, and acquiring geometrical characteristic data of the point clouds;
based on the geometrical characteristic data of the point clouds, carrying out global registration by using the RANSAC algorithm: randomly selecting n random points from a source point cloud P, querying nearest neighbors in the 33-dimensional FPFH feature space to detect their corresponding points in a target point cloud Q, iterating repeatedly over the registration point pairs, selecting the optimal result according to the error-minimum principle, and calculating an initial affine transformation matrix comprising a rotation matrix R and a translation matrix t:
P = {p_1, p_2, …, p_n}, Q = {q_1, q_2, …, q_n}
Q = RP + t
wherein P is a source point cloud, Q is a target point cloud, and n is the number of selected random points.
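Solving Q = RP + t in the least-squares sense for a sampled set of correspondences — the step performed inside each RANSAC iteration — has a standard closed-form SVD (Kabsch) solution. A minimal numpy sketch, illustrative only, since the patent does not specify the solver:

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Closed-form least-squares fit of Q ~ R @ P + t (Kabsch/SVD).

    P, Q: (n, 3) arrays of corresponding points (rows correspond).
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cp).T @ (Q - cq)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Within RANSAC, this fit would be repeated on random correspondence samples and the candidate (R, t) with the smallest alignment error kept as the initial affine transformation matrix.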
S3012, after an initial affine transformation matrix is obtained, rigid transformation between two point clouds is estimated by minimizing a distance difference in an iterative mode based on an ICP algorithm, and fine registration is carried out on the point clouds which are not downsampled, so that a fine affine transformation matrix is obtained.
The ICP algorithm specifically comprises the following steps:
transforming the source point cloud into a coordinate system of the target point cloud based on the initial affine transformation matrix to finish initialization;
calculating the difference between the source point cloud and the target point cloud, and taking the difference as an evaluation result:
based on the ICP algorithm, setting a distance threshold; when the distance between a pair of nearest points in the two groups of point clouds is smaller than the threshold, the pair is accepted as a correspondence, so that n pairs of new registration points in one-to-one correspondence between P and Q are obtained in the two groups of point clouds;
and updating the affine transformation matrix based on the new registration point pair, and repeating the steps until the evaluation result meets a preset threshold value to obtain a precise affine transformation matrix for precise registration.
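The ICP loop described above — transform, match within a distance threshold, refit, repeat until the evaluation result converges — can be sketched as follows. A self-contained, brute-force illustration (function names, the nearest-neighbour strategy, and default thresholds are assumptions; production code would use a KD-tree):

```python
import numpy as np

def _fit_rigid(P, Q):
    # Closed-form least-squares R, t with Q ~ R @ P + t (Kabsch/SVD).
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp_refine(source, target, R, t, max_dist=0.5, iters=20, tol=1e-10):
    """Point-to-point ICP: gate correspondences by a distance threshold,
    refit (R, t) on the accepted pairs, and stop once the mean residual
    stops improving (the 'evaluation result' of the text)."""
    prev = np.inf
    for _ in range(iters):
        moved = source @ R.T + t
        d2 = ((moved[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)                      # brute-force nearest neighbour
        dist = np.sqrt(d2[np.arange(len(moved)), nn])
        mask = dist < max_dist                      # distance-threshold gating
        if mask.sum() < 3:
            break
        R, t = _fit_rigid(source[mask], target[nn[mask]])
        err = dist[mask].mean()
        if prev - err < tol:                        # convergence check
            break
        prev = err
    return R, t
```

Starting from the coarse RANSAC estimate keeps the nearest-neighbour matching in the basin of convergence, which is why the text applies ICP only after the initial matrix is known.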
S302, registering point cloud data:
and carrying out iterative computation on the point cloud data based on the fine affine transformation matrix to obtain fused point cloud data under the same space dimension, namely a local point cloud registration model with a space coordinate system.
S4, detecting point cloud deformation: based on the local point cloud registration model, a deformation detection method based on a coordinate system, a deformation detection method based on point cloud registration or a deformation detection method based on fusion of the coordinate system and the point cloud registration is adopted for three different scenes to determine structural deformation values.
Specifically, the method comprises the following steps:
s401, calculating different point cloud overlapping areas:
after the two groups of point clouds are transformed into a unified coordinate system by point cloud registration based on the fine affine transformation matrix, calculating the maximum and minimum coordinates of each point cloud to obtain its range and create a bounding box;
based on the bounding boxes of the two groups of point clouds, the union of the bounding boxes is taken in the deformation direction of the point clouds so as to ensure that the point clouds in the overlapping area of the two groups of point clouds are completely reserved, and the intersection is taken in the other two directions so as to obtain the overlapping area of the point clouds.
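The union-along-the-deformation-axis, intersection-elsewhere rule for the overlap region can be written down directly in numpy. A hedged sketch (function name and axis convention are assumptions for the example):

```python
import numpy as np

def overlap_region(A, B, deform_axis=0):
    """Crop two registered clouds to their overlap region: union of the
    bounding-box extents along the deformation axis (so deformed material
    is fully retained), intersection along the other two axes."""
    lo = np.maximum(A.min(axis=0), B.min(axis=0))   # intersection lower bound
    hi = np.minimum(A.max(axis=0), B.max(axis=0))   # intersection upper bound
    lo[deform_axis] = min(A[:, deform_axis].min(), B[:, deform_axis].min())
    hi[deform_axis] = max(A[:, deform_axis].max(), B[:, deform_axis].max())

    def crop(P):
        return P[np.all((P >= lo) & (P <= hi), axis=1)]

    return crop(A), crop(B)
```

Taking the union only along the deformation direction ensures that material that has moved out of one epoch's bounding box is not clipped away before the deformation is measured.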
S402, detecting point cloud deformation:
after the extraction of the overlapping areas of the two groups of point clouds is completed, respectively adopting a deformation detection method based on a coordinate system, a deformation detection method based on point cloud registration or a deformation detection method based on the fusion of the coordinate system and the point cloud registration for three different scenes to quantify the deformation value of the overlapping areas of the structural point clouds.
The deformation detection method based on the coordinate system specifically comprises the following steps: after the boundary range of the overlapping area of the two groups of point clouds is checked, based on the three-dimensional coordinates of the two groups of point clouds, the point cloud data is latticed by a grid slicing method, and the unidirectional C2C distance of the wall surface of the structure is calculated to obtain the structure deformation value.
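One possible reading of the grid-slicing C2C computation, sketched for a wall deforming along one axis. The cell size, axis convention, and per-cell mean aggregation are assumptions, not the patent's exact procedure:

```python
import numpy as np

def c2c_one_way(ref, cur, axis=0, cell=0.25):
    """Grid-slice both clouds in the plane orthogonal to `axis` and
    compare per-cell mean coordinates along `axis` (one-way C2C).

    Returns {cell_key: deformation} for cells present in both clouds.
    """
    plane = [a for a in range(3) if a != axis]

    def cell_means(P):
        keys = np.floor(P[:, plane] / cell).astype(int)
        out = {}
        for k, x in zip(map(tuple, keys), P[:, axis]):
            out.setdefault(k, []).append(x)
        return {k: float(np.mean(v)) for k, v in out.items()}

    mr, mc = cell_means(ref), cell_means(cur)
    return {k: mc[k] - mr[k] for k in mr.keys() & mc.keys()}
```

Averaging within each grid cell suppresses per-point reconstruction noise, which is what makes a sub-centimetre unidirectional distance measurable from photogrammetric clouds.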
The deformation detection method based on the point cloud registration specifically comprises the following steps: and calculating coordinate differences among the point clouds according to the fine affine transformation matrix to obtain a structural deformation value.
The deformation detection method based on the registration fusion of the coordinate system and the point cloud specifically comprises the following steps: obtaining local unidirectional deformation of the structure by using a deformation detection method based on a coordinate system, and obtaining rigid body displacement values between two groups of point clouds by using a deformation detection method based on point cloud registration; and superposing the local unidirectional deformation and the local rigid body displacement value of the structure to obtain a more accurate structural deformation detection result.
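The fusion method superposes the local unidirectional C2C deformation with the rigid-body displacement recovered by registration. The exact superposition rule is not spelled out in the text; this sketch assumes the rigid part is evaluated per grid point from the fine (R, t) and added component-wise along the deformation axis:

```python
import numpy as np

def fused_deformation(ref_grid, cur_grid, R, t, axis=0):
    """Superpose local one-way C2C deformation (coordinate difference of
    matched grid cells along `axis`) with the rigid-body displacement
    implied by the fine registration (R, t) at each reference point."""
    local = cur_grid[:, axis] - ref_grid[:, axis]       # unidirectional C2C part
    rigid = (ref_grid @ R.T + t - ref_grid)[:, axis]    # rigid-body part
    return local + rigid
```

The motivation, per the text, is that the coordinate-system method alone misses whole-body motion while the registration method alone misses local bending; their sum is the more complete deformation estimate.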
In order to verify the feasibility and accuracy of the invention, a foundation pit supporting structure of a building under construction is taken as the research object. Two image data acquisition tests were designed and implemented, and high-precision point cloud data of the foundation pit wall in the same area at different times were acquired through refined three-dimensional reconstruction. After the above series of point cloud processing steps is completed, interference-free foundation pit wall point cloud data are extracted, the overlapping area of the two groups of point clouds is extracted, and the deformation of the overlapping area of the foundation pit wall is quantified using each of the three deformation detection methods provided by the invention; the results are shown in fig. 4.
Meanwhile, in this engineering project, in order to monitor the horizontal displacement of the foundation pit soil body, the construction party, according to field conditions, followed the basic layout principle of embedding one inclinometer pipe in the supporting wall every 20 m along the extending direction of the foundation pit, with one measuring point every 0.5 m down the height of each inclinometer pipe; the horizontal displacement inside the soil body at each measuring point is measured daily with an inclinometer, whose detection precision is millimeter-level. In the test, a certain section is correspondingly provided with an embedded inclinometer ZQT, which can provide a relatively accurate deformation reference value for verifying the deformation accuracy of the method.
In order to verify the deformation detection precision of the above methods, the horizontal lateral deformation monitoring result of the inclinometer on the foundation pit wall supporting structure is taken as the deformation reference value; the comparison results of the deformation detection methods are shown in fig. 5, and the relative error between each detection method and the deformation reference value is shown in Table 1 below.
Table 1 relative error between deformation detection results and inclinometer results
As can be seen from Table 1, the results obtained by the registration-based deformation detection method substantially coincide with the inclinometer data; the deformation detection results of the respective methods are similar in shape, but deviate from the reference in relative deformation magnitude to different degrees.
The foregoing describes in detail preferred embodiments of the present invention. It should be understood that numerous modifications and variations can be made in accordance with the concepts of the invention by one of ordinary skill in the art without undue burden. Therefore, all technical solutions which can be obtained by logic analysis, reasoning or limited experiments based on the prior art by a person skilled in the art according to the inventive concept shall be within the scope of protection defined by the claims.
Claims (10)
1. The engineering structure full-field deformation detection method based on three-dimensional point cloud contrast analysis is characterized by comprising the following steps of:
s1, generating point cloud data: selecting artificial image control points and/or natural image control points arranged on the surface of the structure as characteristic points, carrying out image sequence acquisition on the structure, and completing three-dimensional point cloud model reconstruction based on multi-view optical images to obtain an initial point cloud model;
s2, preprocessing a point cloud model: creating a spatial index for the initial point cloud model, performing noise filtering and downsampling to obtain a point cloud model convenient to calculate, and performing clustering segmentation on the point cloud model to obtain a local point cloud model of the structure;
s3, multipoint cloud data registration: coarsely calculating affine transformation matrixes of two groups of local point clouds based on a RANSAC algorithm to obtain an initial affine transformation matrix, determining a fine affine transformation matrix by using point clouds which are not downsampled based on an ICP algorithm, and carrying out fine registration on the point clouds to obtain a local point cloud registration model of the two groups of point clouds under the same spatial dimension of the fine registration;
s4, detecting point cloud deformation: based on the local point cloud registration model, a deformation detection method based on a coordinate system, a deformation detection method based on point cloud registration or a deformation detection method based on fusion of the coordinate system and the point cloud registration is adopted for three different scenes to determine structural deformation values.
2. The method for detecting the full-field deformation of the engineering structure based on the three-dimensional point cloud contrast analysis according to claim 1, wherein the step S1 comprises the following steps:
s101, collecting an image sequence:
based on a close-range photogrammetry method, artificial image control points and/or natural image control points are laid out, a flight route of the unmanned aerial vehicle is planned, and a high-resolution image sequence of the target is obtained;
s102, generating point cloud data:
and based on the high-resolution image sequence of the target, carrying out three-dimensional reconstruction of the point cloud by adopting a structure-from-motion method to obtain an initial three-dimensional point cloud model.
3. The method for detecting the full-field deformation of the engineering structure based on the three-dimensional point cloud contrast analysis according to claim 1, wherein the step S2 comprises the following steps:
s201, creating a spatial index for an initial point cloud model;
s202, noise filtering:
denoising the obtained structured point cloud model based on a neighborhood algorithm to obtain a denoised point cloud model;
s203, point cloud downsampling:
performing point cloud downsampling on the denoised point cloud model based on a voxel method, and preserving geometric structural characteristics of the point cloud while reducing the number of three-dimensional points;
s204, clustering and segmentation of point clouds:
s2041, aiming at a downsampled point cloud model, performing primary clustering segmentation of the point cloud by adopting a RANSAC algorithm, and obtaining a roughly extracted local analysis point cloud by fitting a digital model;
s2042, after obtaining the roughly extracted local analysis point cloud data, carrying out fine extraction of the local analysis point cloud, specifically comprising the following steps: determining a height threshold value of the structural wall surface, completing point cloud segmentation according to the projection density of the point cloud along a specific direction, removing discrete noise points based on a K-means clustering algorithm, and completing fine extraction of the structural local analysis point cloud to obtain a structural local point cloud model.
4. The method for detecting full-field deformation of an engineering structure based on three-dimensional point cloud contrast analysis according to claim 1, wherein the step S3 comprises the following steps:
s301, determining an affine transformation matrix:
under the condition that the relative pose of the point clouds is completely unknown, carrying out global search matching based on the RANSAC algorithm, and roughly calculating initial affine transformation matrixes of two groups of point clouds;
after an initial affine transformation matrix is obtained, rigid transformation between two point clouds is estimated by minimizing a distance difference in an iterative mode based on an ICP algorithm, and fine registration is carried out on the point clouds which are not downsampled to obtain a fine affine transformation matrix;
s302, registering point cloud data:
and carrying out iterative computation on the point cloud data based on the fine affine transformation matrix to obtain fused point cloud data under the same space dimension, namely a local point cloud registration model with a space coordinate system.
5. The method for detecting full-field deformation of an engineering structure based on three-dimensional point cloud contrast analysis according to claim 4, wherein the RANSAC algorithm in S301 specifically comprises the following steps:
estimating the normal vector of each point in the two groups of point clouds based on the finely extracted local point clouds, calculating the FPFH (Fast Point Feature Histogram) feature of each point based on the normal vector, and acquiring geometrical characteristic data of the point clouds;
based on the geometrical characteristic data of the point clouds, carrying out global registration through the RANSAC algorithm: randomly selecting n random points from a source point cloud P, querying nearest neighbors in the 33-dimensional FPFH feature space to detect their corresponding points in a target point cloud Q, iterating repeatedly over the registration point pairs, selecting the optimal result according to the error-minimum principle, and calculating an initial affine transformation matrix.
6. The method for detecting full-field deformation of an engineering structure based on three-dimensional point cloud contrast analysis according to claim 5, wherein the initial affine transformation matrix comprises a rotation matrix R and a translation matrix t:
P = {p_1, p_2, …, p_n}, Q = {q_1, q_2, …, q_n}
Q = RP + t
wherein P is a source point cloud, Q is a target point cloud, and n is the number of selected random points.
7. The method for detecting full-field deformation of an engineering structure based on three-dimensional point cloud contrast analysis according to claim 6, wherein the ICP algorithm in S301 specifically includes the following steps:
transforming the source point cloud into a coordinate system of the target point cloud based on the initial affine transformation matrix to finish initialization;
calculating the difference between the source point cloud and the target point cloud, and taking the difference as an evaluation result:
based on the ICP algorithm, setting a distance threshold; when the distance between a pair of nearest points in the two groups of point clouds is smaller than the threshold, the pair is accepted as a correspondence, so that n pairs of new registration points in one-to-one correspondence between P and Q are obtained in the two groups of point clouds;
and updating the affine transformation matrix based on the new registration point pair, and repeating the steps until the evaluation result meets a preset threshold value to obtain a precise affine transformation matrix for precise registration.
8. The method for detecting full-field deformation of an engineering structure based on three-dimensional point cloud contrast analysis according to claim 1, wherein the step S4 comprises the following steps:
s401, calculating different point cloud overlapping areas:
calculating boundary frames of different point clouds based on the local point cloud registration model, and taking intersection of the boundary frames to obtain a point cloud overlapping region;
s402, detecting point cloud deformation:
after the extraction of the overlapping areas of the two groups of point clouds is completed, respectively adopting a deformation detection method based on a coordinate system, a deformation detection method based on point cloud registration or a deformation detection method based on the fusion of the coordinate system and the point cloud registration for three different scenes to quantify the deformation value of the overlapping areas of the structural point clouds.
9. The method for detecting full-field deformation of an engineering structure based on three-dimensional point cloud contrast analysis according to claim 8, wherein the step S401 specifically comprises the following steps:
after the two groups of point clouds are transformed into a unified coordinate system by point cloud registration based on the fine affine transformation matrix, calculating the maximum and minimum coordinates of each point cloud to obtain its range and create a bounding box;
based on the bounding boxes of the two groups of point clouds, the union of the bounding boxes is taken in the deformation direction of the point clouds so as to ensure that the point clouds in the overlapping area of the two groups of point clouds are completely reserved, and the intersection is taken in the other two directions so as to obtain the overlapping area of the point clouds.
10. The method for detecting the full-field deformation of the engineering structure based on the three-dimensional point cloud contrast analysis according to claim 8, wherein the deformation detection method based on the coordinate system is specifically as follows: after verifying the boundary range of the overlapping area of the two groups of point clouds, gridding the point cloud data by a grid slicing method based on the three-dimensional coordinates of the two groups of point clouds, and calculating the unidirectional C2C distance of the structural wall surface to obtain a structural deformation value;
the deformation detection method based on the point cloud registration specifically comprises the following steps: calculating coordinate differences among the point clouds according to the fine affine transformation matrix to obtain a structural deformation value;
the deformation detection method based on the registration fusion of the coordinate system and the point cloud specifically comprises the following steps: obtaining local unidirectional deformation of the structure by using a deformation detection method based on a coordinate system, and obtaining rigid body displacement values between two groups of point clouds by using a deformation detection method based on point cloud registration; and superposing the local unidirectional deformation and the local rigid body displacement value of the structure to obtain a more accurate structural deformation detection result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310367251.2A CN116518864A (en) | 2023-04-07 | 2023-04-07 | Engineering structure full-field deformation detection method based on three-dimensional point cloud comparison analysis |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116518864A true CN116518864A (en) | 2023-08-01 |
Family
ID=87402106
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310367251.2A Pending CN116518864A (en) | 2023-04-07 | 2023-04-07 | Engineering structure full-field deformation detection method based on three-dimensional point cloud comparison analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116518864A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117115140A (en) * | 2023-09-25 | 2023-11-24 | 重庆大学溧阳智慧城市研究院 | 3D printing concrete column surface crack detection method based on point cloud segmentation registration |
CN117726673A (en) * | 2024-02-07 | 2024-03-19 | 法奥意威(苏州)机器人系统有限公司 | Weld joint position obtaining method and device and electronic equipment |
CN118537381A (en) * | 2024-07-19 | 2024-08-23 | 水利部交通运输部国家能源局南京水利科学研究院 | Method for jointly calibrating water-to-water slope ratio of upstream dam slope of earth-rock dam in operation period |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Xu et al. | Reconstruction of scaffolds from a photogrammetric point cloud of construction sites using a novel 3D local feature descriptor | |
Holgado‐Barco et al. | Semiautomatic extraction of road horizontal alignment from a mobile LiDAR system | |
CN116518864A (en) | Engineering structure full-field deformation detection method based on three-dimensional point cloud comparison analysis | |
CN104574393A (en) | Three-dimensional pavement crack image generation system and method | |
Xu et al. | A 3D reconstruction method for buildings based on monocular vision | |
CN112465732A (en) | Registration method of vehicle-mounted laser point cloud and sequence panoramic image | |
Kwak | Automatic 3D building model generation by integrating LiDAR and aerial images using a hybrid approach | |
Guo et al. | Extraction of dense urban buildings from photogrammetric and LiDAR point clouds | |
CN114494385A (en) | Visual early warning method for water delivery tunnel diseases | |
Hammoudi et al. | Extracting building footprints from 3D point clouds using terrestrial laser scanning at street level | |
Tsakiri et al. | Change detection in terrestrial laser scanner data via point cloud correspondence | |
Özdemir et al. | A multi-purpose benchmark for photogrammetric urban 3D reconstruction in a controlled environment | |
Zhao et al. | Intelligent segmentation method for blurred cracks and 3D mapping of width nephograms in concrete dams using UAV photogrammetry | |
Demir | Automated detection of 3D roof planes from Lidar data | |
Shan et al. | Feasibility of Accurate Point Cloud Model Reconstruction for Earthquake‐Damaged Structures Using UAV‐Based Photogrammetry | |
CN114136335A (en) | Aerial triangle precision analysis method based on unmanned aerial vehicle oblique photogrammetry | |
Ahmad et al. | Generation of three dimensional model of building using photogrammetric technique | |
CN117968631A (en) | Pavement subsidence detection method based on unmanned aerial vehicle DOM and satellite-borne SAR image | |
Ridene et al. | Registration of fixed-and-mobile-based terrestrial laser data sets with DSM | |
Chen et al. | Intelligent interpretation of the geometric properties of rock mass discontinuities based on an unmanned aerial vehicle | |
Lee et al. | Determination of building model key points using multidirectional shaded relief images generated from airborne LiDAR data | |
Jung et al. | Progressive modeling of 3D building rooftops from airborne Lidar and imagery | |
CN115713548A (en) | Automatic registration method for multi-stage live-action three-dimensional model | |
CN114255051A (en) | Authenticity inspection method of orthometric product based on stereo mapping satellite | |
Sadeq | Using total probability in image template matching. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |