CN113706591A - Point cloud-based surface weak texture satellite three-dimensional reconstruction method - Google Patents
- Publication number
- CN113706591A (application number CN202110874322.9A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- pose
- satellite
- point
- registration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application discloses a point cloud-based method for three-dimensional reconstruction of satellites with weakly textured surfaces. The method comprises a point cloud acquisition and preprocessing step, an inter-frame point cloud registration and key frame selection step, a loop closure detection and back-end pose optimization step, and a model surface reconstruction step. The pose is estimated from the processed point clouds, key frame point clouds are screened according to the pose results, the poses are optimized, and the point clouds are registered and fused to complete surface reconstruction of the model. The method enables three-dimensional reconstruction of a satellite under weak surface texture and poor illumination conditions, and provides a good foundation for component identification and capture of the satellite.
Description
Technical Field
The invention relates to the technical field of three-dimensional reconstruction, and in particular to a point cloud-based method for three-dimensional reconstruction of satellites with weakly textured surfaces.
Background
In space, before component identification and capturing are carried out on a satellite, three-dimensional reconstruction needs to be carried out on the satellite. However, due to the influence of factors such as poor illumination conditions in space and weak texture on the surface of the satellite, the satellite cannot be reconstructed by using a visual three-dimensional reconstruction scheme based on images.
Disclosure of Invention
The invention aims to provide a point cloud-based surface weak texture satellite three-dimensional reconstruction method, which is used for solving the technical problem that the satellite cannot be reconstructed by using an image-based visual three-dimensional reconstruction scheme at present.
In order to achieve the above object, an embodiment of the present invention provides a point cloud-based surface weak texture satellite three-dimensional reconstruction method, including the following steps:
point cloud obtaining and preprocessing, namely obtaining a single-frame point cloud of a satellite and preprocessing the point cloud;
performing FPFH (Fast Point Feature Histogram) feature coarse registration and ICP (Iterative Closest Point) fine registration on the point clouds to acquire the accurate pose of the satellite, and then screening key frames; the FPFH feature coarse registration puts point cloud features into a histogram in a unified manner to obtain a coarse registration pose; the ICP fine registration takes the coarse registration pose as the initial pose and iteratively obtains a fine registration pose;
performing pose graph updating and loop closure detection based on the selected key frames, wherein the loop closure detection finds, among the historical key frames, frames that are not adjacent to but close in position to the current key frame; judging whether a loop closure occurs, and optimizing the pose graph when it does; after the pose graph is optimized, and when no loop closure occurs, judging whether the inter-frame point cloud registration is finished; when it is not finished, returning to the point cloud acquisition and preprocessing step to obtain the next frame of point cloud; when it is finished, optimizing the pose graph; and
a model surface reconstruction step, namely, after the pose graph is optimized, performing TSDF-based surface reconstruction to complete the satellite three-dimensional modeling.
Further, the point cloud preprocessing in the point cloud acquisition and preprocessing step comprises: for each frame of point cloud, first removing the background by screening on the spatial three-dimensional coordinate values according to the spatial position of the satellite, obtaining a point cloud containing only the satellite; then sequentially removing outliers from the satellite point cloud, performing voxel-filtering down-sampling, and calculating the corresponding normal vectors to complete the point cloud preprocessing.
Further, the inter-frame point cloud registration and key frame selection step comprises:
a point cloud data processing step, wherein a fixed local coordinate system is defined, the single-frame point cloud obtained first is taken as the target point cloud, and the point cloud obtained later as the source point cloud;
an FPFH feature coarse registration step, namely extracting the FPFH features of the source point cloud and the target point cloud in the local coordinate system, matching the FPFH features of the source point cloud and adjusting their pose, combining them with the FPFH features of the target point cloud to form corresponding feature point pairs, and, after the corresponding feature point pairs are iteratively updated, superposing the positions of the corresponding feature point pairs to form the fast point feature histogram of the satellite, giving the final coarse registration pose;
an ICP fine registration step, wherein, in the local coordinate system, the final coarse registration pose is taken as the initial pose, the satellite edge frame in the initial pose is identified as a historical key frame, the plane where the historical key frame is located is taken as the reference plane for coordinate transformation of the source point cloud, and the accurate pose of the satellite is formed as the fine registration pose after a preset number of iterations or once the distance between the points of the source point cloud and the reference plane is smaller than a distance threshold; and
a key frame screening step, wherein the satellite edge frame is screened as the current key frame based on the accurate pose of the satellite.
Further, the FPFH feature coarse registration step comprises an FPFH feature extraction step, wherein the FPFH features describe the local geometric characteristics of points using a 33-dimensional feature vector; the calculation of the FPFH feature is divided into two steps:
a) defining a fixed local coordinate system, calculating a series of α, φ, θ feature values between each query point p_s in the point cloud and its neighborhood points using the following formulas, and putting the feature values into a histogram in a unified manner to obtain the simplified point feature histogram;
α = v · n_t;
φ = u · (p_t − p_s) / ||p_t − p_s||;
θ = arctan(w · n_t, u · n_t);
wherein p_s is a point in the point cloud;
p_t is a neighborhood point of p_s;
n_s, n_t are the normals at the corresponding points;
u, v, w are the three axes of the local coordinate system constructed with p_s as the origin;
α is the included angle between n_t and the v axis;
φ is the included angle between n_s and (p_t − p_s);
θ is the included angle between the projection of n_t on the u–w plane and the u axis; and
b) re-determining the k neighborhood of each point in the point cloud, and calculating the FPFH value of the query point p_q from the neighboring SPFH values using the following formula:
FPFH(p_q) = SPFH(p_q) + (1/k) · Σ_{i=1}^{k} (1/ω_i) · SPFH(p_i);
wherein the k neighborhood is the set formed by the k points nearest to a point;
p_i is a point in the k neighborhood of p_q;
ω_i is a weight representing the distance between the query point p_q and its neighborhood point p_i.
Further, the FPFH feature extraction step is followed by a feature registration step, specifically including:
a) firstly, randomly sampling the FPFH features of the target point cloud, and querying the feature points corresponding to the sampling points in the source point cloud;
b) then resolving the pose of the point cloud between frames by adopting a least square method based on the inquired feature points and carrying out coordinate transformation;
c) then, in the transformed inter-frame point cloud, all feature matching points of the target point cloud are found by inquiring a 33-dimensional FPFH feature space of the source point cloud, and feature mismatching elimination is realized on the basis of the Euclidean distance of corresponding points, the length of a connecting line between two features and a feature point method vector; and
d) counting the number of the characteristic corresponding points after the mismatching elimination, namely the number of the internal points, judging whether the iteration times reach a termination condition, if the iteration is not terminated, updating the corresponding characteristic point pairs, and repeating the steps a and c; and if the iteration is ended, selecting the corresponding pose with the maximum number of the inner points as a final pose result.
Further, the ICP fine registration step includes:
a) carrying out coordinate transformation on the source point cloud using the initial pose, then searching the nearest-neighbor corresponding points of the transformed inter-frame point clouds, and recording the matching set formed by the corresponding points of the target point cloud p and the transformed source point cloud q as K = {(p, q)};
b) solving the pose matrix T by minimizing the point-to-plane distance objective function E(T) defined on the matching set K:
E(T) = Σ_{(p,q)∈K} ((p − T·q) · n_p)²;
wherein E(T) is the objective function of ICP registration, expressed as the point-to-plane distance; T is the pose matrix to be calculated by ICP registration, comprising a rotation matrix and a translation matrix; n_p is the normal vector at point p;
c) judging whether the number of iterations or the distance threshold reaches the iteration termination condition; if the iteration is not terminated, updating the initial pose with the solved pose and repeating the above steps; if the iteration is terminated, obtaining the accurate pose.
Further, when screening the key frames, the matching degree, the inlier root mean square error, and the pose variation amplitude are taken as the basis. The matching degree characterizes the size of the overlapping area of two frames of point clouds, specifically the number of inliers in the target point cloud P matched with the source point cloud. The inlier root mean square error is the root mean square error over all matched inliers. The pose variation amplitude characterizes the motion amplitude of the sensor acquiring the point cloud, and is measured by adding the modulus of the rotation of the pose matrix T_{l,c} to the norm of its translation vector.
Further, in the loop closure detection and back-end pose optimization step, the loop closure detection step is as follows:
a) when a key frame is detected, adding a vertex to the pose graph and recording the pose matrix of the frame; adding an edge connecting the current vertex and the previous vertex and recording the transformation pose matrix between the adjacent key frames; and taking the translation vector of the frame's pose matrix as its coordinate and recording it;
b) when the total number of key frames is greater than 10, searching for the 5 key frames closest to the coordinate of the current key frame using a k-dimensional tree;
c) if the index difference between a retrieved key frame and the current key frame is greater than 10, determining that a loop closure occurs; otherwise, determining that no loop closure exists and repeating the above steps;
d) after a loop closure is detected, performing inter-frame registration between the loop frame point cloud and the current frame point cloud, connecting the two corresponding vertices with an edge, recording the pose error between the two frames, updating and optimizing the pose graph, and repeating the above steps.
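Steps a)–c) above can be sketched on the key frame translation coordinates. This is a minimal NumPy sketch with illustrative function names; a brute-force distance sort stands in for the k-dimensional tree, which only matters for long trajectories. The thresholds (more than 10 key frames, 5 nearest neighbors, index gap greater than 10) follow the text.

```python
import numpy as np

def detect_loop(positions, cur):
    """Loop-closure candidate search over key frame translation coordinates.
    positions: (n, 3) array of key frame coordinates; cur: index of the
    current key frame. Returns the index of a loop frame, or None."""
    if len(positions) <= 10:          # too few key frames to search
        return None
    d = np.linalg.norm(positions - positions[cur], axis=1)
    d[cur] = np.inf                   # exclude the current frame itself
    for idx in np.argsort(d)[:5]:     # the 5 nearest key frames
        if cur - idx > 10:            # far apart in time -> loop closure
            return int(idx)
    return None
```

A circular trajectory (sensor walking around the satellite) triggers a loop when it returns near the start, while a straight trajectory does not.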
Further, in the loop closure detection and back-end pose optimization step, the pose optimization reduces the accumulated error of point cloud registration by applying least-squares optimization on the pose graph, averaging the residual error over all key frames. In the pose graph optimization, the graph vertices are the optimization variables of the nonlinear least-squares problem, expressed as the optimized key frame pose matrices; the edges connecting the vertices are the error terms between the optimization variables, expressed as the inter-frame pose estimation errors. Let i, j be the vertices corresponding to two key frames, T_i and T_j the pose matrices of vertices i and j respectively, and T_ij the transformation matrix between vertices i and j; the error e_ij of the edge connecting the two vertices is expressed as e_ij = ln(T_ij^{-1} · T_i^{-1} · T_j)^∨, wherein the superscript "-1" denotes matrix inversion, and the superscript "∨" denotes the operation of recovering the unique vector corresponding to an antisymmetric matrix. The pose graph optimization is completed with the g2o optimization tool, minimizing the errors of the adjacent edges and loop closure edges to obtain each optimized key frame pose matrix.
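A minimal numeric sketch of the edge error e_ij, with the matrix logarithm approximated to first order (valid near convergence, where the error transform is close to identity); this is an illustration of the residual only, not the full SE(3) log map or the g2o solver:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous pose matrix from rotation and translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def edge_error(Ti, Tj, Tij):
    """Residual of one pose-graph edge, e_ij = ln(Tij^-1 Ti^-1 Tj)^v,
    approximated to first order: translation part taken from M[:3, 3],
    rotation part from the antisymmetric part of M's rotation block."""
    M = np.linalg.inv(Tij) @ np.linalg.inv(Ti) @ Tj
    w = 0.5 * np.array([M[2, 1] - M[1, 2],
                        M[0, 2] - M[2, 0],
                        M[1, 0] - M[0, 1]])
    return np.concatenate([M[:3, 3], w])   # [translation error, rotation error]
```

A consistent measurement T_ij = T_i^{-1} T_j yields a zero residual; perturbing the measured translation shows up directly in the error vector.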
Further, in the model surface reconstruction step, after the pose graph optimization is finished, the depth maps, point clouds, and optimized pose graph are combined, and surface reconstruction of the satellite point cloud is performed using TSDF (Truncated Signed Distance Function) features, specifically comprising the following steps:
a) equally dividing the whole modeling space into a number of small voxels of a certain size, wherein each voxel stores a TSDF value representing the distance between that position and the object surface;
b) integrating the depth map and point cloud key frame data into the TSDF volume space, updating the TSDF values, and obtaining the fused TSDF value by weighted superposition;
c) a TSDF value greater than 0 means the voxel lies outside the object, less than 0 means inside the object, and equal to 0 means on the object surface; the surface of the reconstructed object is therefore extracted by the marching cubes algorithm; and
d) rendering the final model of the satellite by ray tracing.
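Steps a)–c) can be sketched in one dimension (a single ray of voxels). The weighted update in b) is the standard running average D ← (W·D + w·d)/(W + w) with distances truncated to ±trunc, and the 1-D analogue of the marching cubes extraction in c) is linear interpolation of the zero crossing between voxels of opposite sign. The grid and truncation distance below are illustrative, not taken from the patent:

```python
import numpy as np

def tsdf_update(tsdf, weight, new_d, new_w, trunc=0.1):
    """Fuse one observation into the voxel grid with a weighted running
    average; signed distances are truncated to [-trunc, trunc]."""
    d = np.clip(new_d, -trunc, trunc)
    m = new_w > 0                     # update only observed voxels
    tsdf[m] = (weight[m] * tsdf[m] + new_w[m] * d[m]) / (weight[m] + new_w[m])
    weight[m] += new_w[m]
    return tsdf, weight

def surface_crossings(tsdf, xs):
    """1-D marching-cubes analogue: interpolate the zero level set
    between adjacent voxels of opposite sign (>0 outside, <0 inside)."""
    s = np.sign(tsdf)
    idx = np.where(s[:-1] * s[1:] < 0)[0]
    f = tsdf[idx] / (tsdf[idx] - tsdf[idx + 1])   # interpolation fraction
    return xs[idx] + f * (xs[idx + 1] - xs[idx])
```

Fusing two identical observations of a surface leaves the zero crossing at the true surface position, which the interpolation recovers exactly for a linear distance field.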
The point cloud-based method for three-dimensional reconstruction of satellites with weakly textured surfaces has the following beneficial effects: pose estimation is performed using the processed point clouds, key frame point clouds are screened according to the pose results, the poses are optimized, and finally the point clouds are registered and fused to complete model surface reconstruction. The method can realize three-dimensional reconstruction of a satellite under weak surface texture and poor illumination conditions, and provides a good foundation for component identification and capture of the satellite.
Drawings
The technical solution and other advantages of the present application will be presented in the following detailed description of specific embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a flowchart of a point cloud-based surface weak texture satellite three-dimensional reconstruction method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a point cloud-based surface weak texture satellite three-dimensional reconstruction method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of the inter-frame point cloud registration and key frame selection steps provided in the embodiment of the present application.
Fig. 4 is a flowchart of the inter-frame point cloud registration and key frame selection steps provided in the embodiment of the present application.
Fig. 5 is a fixed local coordinate system as shown in the present application.
FIG. 6 shows a point p in the present applicationqA k neighborhood influence range map of the center.
Fig. 7 is a schematic view of the pose diagram of the present application.
Fig. 8 is a schematic diagram of a keyframe trajectory change before and after pose graph optimization according to the present application.
Fig. 9 is a schematic diagram of the TSDF voxel model of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; may be mechanically connected, may be electrically connected or may be in communication with each other; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
Specifically, referring to fig. 1 and 2, an embodiment of the present application provides a method for three-dimensional reconstruction of a satellite with weak texture on a surface based on point cloud, including the following steps S1-S4.
S1, point cloud obtaining and preprocessing, namely obtaining a single-frame point cloud of the satellite and preprocessing the point cloud;
S2, an inter-frame point cloud registration and key frame selection step, wherein FPFH (Fast Point Feature Histogram) feature coarse registration and ICP (Iterative Closest Point) fine registration are performed on the point clouds to obtain the accurate pose of the satellite, and key frames are then screened; the FPFH feature coarse registration puts point cloud features into a histogram in a unified manner to obtain a coarse registration pose; the ICP fine registration takes the coarse registration pose as the initial pose and iteratively obtains a fine registration pose;
S3, a loop closure detection and back-end pose optimization step, namely updating the pose graph and performing loop closure detection based on the selected key frames, wherein the loop closure detection finds, among the historical key frames, frames that are not adjacent to but close in position to the current key frame; judging whether a loop closure occurs, and optimizing the pose graph when it does; after the pose graph is optimized, and when no loop closure occurs, judging whether the inter-frame point cloud registration is finished; when it is not finished, returning to the point cloud acquisition and preprocessing step to obtain the next frame of point cloud; when it is finished, optimizing the pose graph; and
S4, a model surface reconstruction step, wherein after the pose graph optimization is finished, TSDF-based surface reconstruction is performed to complete the satellite three-dimensional modeling.
1. Point cloud acquisition and pre-processing
In the technical scheme, a ranging camera such as a Kinect is used as the sensor for acquiring three-dimensional information, obtaining color images and depth images of the satellite to be reconstructed. In order to avoid occlusion and visual blind areas, during data acquisition the handheld sensor first circles the satellite horizontally, and is then raised and circles the satellite again, obtaining omnidirectional three-dimensional information of the satellite. Then, according to the camera model, the three-dimensional point cloud data is calculated from the acquired color images, depth images, and camera intrinsic parameters, completing the point cloud acquisition.
For each frame of point cloud, first the background is removed by screening on the spatial three-dimensional coordinate values according to the spatial position of the satellite, obtaining a point cloud containing only the satellite; then outliers are removed from the satellite point cloud, voxel-filtering down-sampling is performed, and the corresponding normal vectors are calculated, completing the point cloud preprocessing.
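The cropping and down-sampling described above can be sketched as follows. This is a minimal NumPy sketch under stated assumptions, not the patent's implementation: the axis-aligned box bounds and voxel size are illustrative, and the outlier removal and normal estimation stages are omitted for brevity.

```python
import numpy as np

def preprocess_cloud(points, bounds, voxel=0.05):
    """Crop an (n, 3) point cloud to an axis-aligned box around the
    satellite, then voxel-downsample by averaging the points in each voxel."""
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    pts = points[mask]
    # voxel grid downsampling: group points by their integer voxel index
    keys = np.floor(pts / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    n_voxels = inv.max() + 1
    counts = np.bincount(inv, minlength=n_voxels)
    sums = np.zeros((n_voxels, 3))
    for d in range(3):
        sums[:, d] = np.bincount(inv, weights=pts[:, d], minlength=n_voxels)
    return sums / counts[:, None]     # one centroid per occupied voxel
```

The background points fall outside the box and are dropped before down-sampling, so the remaining cloud contains only the satellite.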
2. Interframe point cloud registration and key frame selection
The invention selects key frames from the acquired data by using the inter-frame registration result, and then performs subsequent pose optimization and transformation fusion by using the key frames to realize satellite three-dimensional reconstruction. A schematic diagram of the inter-frame point cloud registration and key frame selection steps is shown in fig. 3.
The registration result is the transformation pose obtained by inter-frame registration of the previous key frame point cloud and the current frame point cloud. The specific registration method combines FPFH feature coarse registration with point-to-plane ICP fine registration: the pose transformation matrix calculated by the coarse registration is used as the input of the ICP algorithm for fine registration, yielding the final inter-frame transformation pose. Notation: P_l: the previous key frame point cloud; P_c: the current key frame point cloud; T_{l,c}: the transformation matrix (rotation + translation) between the two frames of point clouds (l, c); FPFH: Fast Point Feature Histogram, a popular feature describing a three-dimensional point; ICP: Iterative Closest Point, a registration algorithm; T_r: the pose transformation matrix (rotation + translation) calculated by the coarse registration.
Specifically, referring to fig. 4, the inter-frame point cloud registration and key frame selection step S2 includes:
S21, a point cloud data processing step, wherein a fixed local coordinate system is defined, the single-frame point cloud obtained first is taken as the target point cloud, and the point cloud obtained later as the source point cloud;
S22, an FPFH feature coarse registration step, namely extracting the FPFH features of the source point cloud and the target point cloud in the local coordinate system, matching the FPFH features of the source point cloud and adjusting their pose, combining them with the FPFH features of the target point cloud to form corresponding feature point pairs, and, after the corresponding feature point pairs are iteratively updated, superposing the positions of the corresponding feature point pairs to form the fast point feature histogram of the satellite, giving the final coarse registration pose;
S23, an ICP fine registration step, wherein, in the local coordinate system, the final coarse registration pose is taken as the initial pose, the satellite edge frame in the initial pose is identified as a historical key frame, the plane where the historical key frame is located is taken as the reference plane for coordinate transformation of the source point cloud, and the accurate pose of the satellite is formed as the fine registration pose after a preset number of iterations or once the distance between the points of the source point cloud and the reference plane is smaller than a distance threshold; and
S24, a key frame screening step, wherein the satellite edge frame is screened as the current key frame based on the accurate pose of the satellite.
2.1FPFH feature coarse registration
The FPFH feature coarse registration step S22 includes an FPFH feature extraction step and a feature registration step.
1) Extraction of FPFH features
The FPFH features describe the local geometric characteristics of points, using a 33-dimensional feature vector. The calculation of the FPFH feature is divided into two steps:
a) defining a fixed local coordinate system, as shown in fig. 5, then calculating a series of α, φ, θ feature values between each query point p_s in the point cloud and its neighborhood points using the following formulas, and putting the feature values into the histogram in a uniform manner to obtain the simplified point feature histogram;
α = v · n_t;
φ = u · (p_t − p_s) / ||p_t − p_s||;
θ = arctan(w · n_t, u · n_t);
wherein p_s is a point in the point cloud;
p_t is a neighborhood point of p_s;
n_s, n_t are the normals at the corresponding points;
u, v, w are the three axes of the local coordinate system constructed with p_s as the origin;
α is the included angle between n_t and the v axis;
φ is the included angle between n_s and (p_t − p_s);
θ is the included angle between the projection of n_t on the u–w plane and the u axis; and
b) re-determining the k neighborhood of each point in the point cloud, and calculating the FPFH value of the query point p_q from the neighboring SPFH values using the following formula:
FPFH(p_q) = SPFH(p_q) + (1/k) · Σ_{i=1}^{k} (1/ω_i) · SPFH(p_i);
wherein the k neighborhood is the set formed by the k points nearest to a point;
p_i is a point in the k neighborhood of p_q;
ω_i is a weight representing the distance between the query point p_q and its neighborhood point p_i.
The importance of this weighting scheme can be appreciated from fig. 6, which shows the influence range of the k neighborhood centered on a point. In the specific calculation, 11 statistical subintervals are used for the FPFH: the parameter interval of each feature value is divided into 11 subintervals, a feature histogram is calculated for each and the three are then concatenated, yielding a 33-dimensional feature vector whose elements are floating-point values; this vector is the resulting FPFH feature.
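As a concrete illustration of the α, φ, θ pair features and the 3 × 11 = 33-bin layout described above, here is a minimal NumPy sketch. It uses one common Darboux-frame convention, and the bin ranges are an assumption, not taken from the patent: [−1, 1] for α and φ (they are computed as dot products of unit vectors) and [−π, π] for θ.

```python
import numpy as np

def pair_features(ps, pt, ns, nt):
    """Darboux-frame angles (alpha, phi, theta) for one point pair,
    following the standard PFH/FPFH construction (one frame convention)."""
    d = pt - ps
    dist = np.linalg.norm(d)
    u = ns                         # first axis: the source normal n_s
    v = np.cross(d / dist, u)      # second axis (assumes d not parallel to u)
    v /= np.linalg.norm(v)
    w = np.cross(u, v)             # third axis completes the frame
    alpha = np.dot(v, nt)                              # cosine of angle(n_t, v)
    phi = np.dot(u, d) / dist                          # cosine of angle(n_s, d)
    theta = np.arctan2(np.dot(w, nt), np.dot(u, nt))   # angle in the u-w plane
    return alpha, phi, theta

def spfh_histogram(feats):
    """Bin each of the three features into 11 subintervals and concatenate
    into a normalized 33-dimensional histogram (the SPFH of one point)."""
    feats = np.atleast_2d(np.asarray(feats, dtype=float))   # (n_pairs, 3)
    ranges = [(-1.0, 1.0), (-1.0, 1.0), (-np.pi, np.pi)]
    parts = []
    for col, (lo, hi) in zip(feats.T, ranges):
        idx = np.clip(((col - lo) / (hi - lo) * 11).astype(int), 0, 10)
        parts.append(np.bincount(idx, minlength=11))
    h = np.concatenate(parts).astype(float)
    return h / h.sum()
```

The FPFH of a point would then be its SPFH plus the distance-weighted average of its neighbors' SPFHs, per the formula above.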
2) Random sample consensus (RANSAC) -based feature registration
The feature registration step is feature registration based on random sampling consistency, and specifically comprises the following steps:
a) firstly, randomly sampling the FPFH (Fast Point Feature Histogram) features of the target point cloud, and querying the feature points corresponding to the sampling points in the source point cloud;
b) then resolving the pose of the point cloud between frames by adopting a least square method based on the inquired feature points and carrying out coordinate transformation;
c) then, in the transformed inter-frame point cloud, all feature matching points of the target point cloud are found by inquiring a 33-dimensional FPFH feature space of the source point cloud, and feature mismatching elimination is realized on the basis of the Euclidean distance of corresponding points, the length of a connecting line between two features and a feature point method vector; and
d) counting the number of the characteristic corresponding points after the mismatching elimination, namely the number of the internal points, judging whether the iteration times reach a termination condition, if the iteration is not terminated, updating the corresponding characteristic point pairs, and repeating the steps a and c; and if the iteration is ended, selecting the corresponding pose with the maximum number of the inner points as a final pose result.
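Steps a)–d) above can be sketched as a RANSAC loop over putative feature correspondences; the least-squares pose solve uses the SVD-based (Kabsch) method. This is a hedged sketch with illustrative names: the mismatch-elimination tests on connecting-line length and normals are omitted, so the inlier test here is just the Euclidean distance of corresponding points.

```python
import numpy as np

def rigid_from_pairs(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst via SVD."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_register(src, dst, pairs, iters=200, thresh=0.05, seed=0):
    """RANSAC over putative correspondences `pairs` (an (n, 2) index array
    into src/dst); keeps the pose with the largest inlier count."""
    pairs = np.asarray(pairs)
    rng = np.random.default_rng(seed)
    best = (0, np.eye(3), np.zeros(3))
    for _ in range(iters):
        sel = rng.choice(len(pairs), 3, replace=False)   # minimal sample
        R, t = rigid_from_pairs(src[pairs[sel, 0]], dst[pairs[sel, 1]])
        d = np.linalg.norm((src[pairs[:, 0]] @ R.T + t) - dst[pairs[:, 1]],
                           axis=1)
        n_in = int((d < thresh).sum())                   # count inliers
        if n_in > best[0]:
            best = (n_in, R, t)
    return best
```

With mostly correct correspondences, a contamination-free minimal sample is drawn with high probability within a few hundred iterations, and the pose with the maximum inlier count is returned, matching step d).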
2.2 Point-to-plane ICP fine registration
The ICP fine registration step S23 is point-to-plane ICP fine registration. The point-to-plane ICP registration algorithm takes the pose obtained by coarse registration as the initial pose and iterates to obtain the fine registration pose, where each iteration proceeds as follows:
a) transform the source point cloud with the initial pose, then search for nearest-neighbor corresponding points between the transformed inter-frame point clouds, i.e. between the target point cloud p and the transformed source point cloud Tq; the matching set formed by these corresponding points is recorded as K = {(p, q)};
b) solve the pose matrix T by minimizing the point-to-plane objective function E(T) defined over the matching set K:
E(T) = Σ_{(p,q)∈K} ((p − T·q) · n_p)²
where E(T) is the objective function of ICP registration, expressed as a point-to-plane distance; T is the pose matrix to be solved by ICP registration, comprising a rotation matrix and a translation; and n_p is the normal vector at point p;
c) judge whether the iteration count or the distance threshold has reached the termination condition; if the iteration has not terminated, update the initial pose with the solved pose and repeat the above steps; if the iteration has terminated, the accurate pose is obtained.
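A minimal sketch of the point-to-plane iteration, assuming correspondences are already paired index-wise (a real ICP re-finds nearest neighbours each round). The linearization R ≈ I + [w]× is a standard way to minimize the objective E(T), not a detail given in the patent:

```python
import numpy as np

def point_to_plane_icp_step(src, dst, nrm):
    """One linearized point-to-plane solve: minimize sum(((R s_i + t - d_i) . n_i)^2)
    with the small-angle approximation R ~= I + [w]x; unknowns x = (w, t)."""
    A = np.hstack([np.cross(src, nrm), nrm])       # Jacobian rows [s_i x n_i, n_i]
    b = np.einsum('ij,ij->i', dst - src, nrm)      # residuals (d_i - s_i) . n_i
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    w, t = x[:3], x[3:]
    # rebuild a proper rotation from the small-angle vector via Rodrigues
    th = np.linalg.norm(w)
    if th < 1e-12:
        R = np.eye(3)
    else:
        k = w / th
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)
    return R, t

def icp_point_to_plane(src, dst, nrm, iters=20):
    """Iterate the linearized solve; returns a 4x4 pose mapping src onto dst.
    nrm holds the normals of the target (dst) points."""
    T = np.eye(4)
    cur = src.copy()
    for _ in range(iters):
        R, t = point_to_plane_icp_step(cur, dst, nrm)
        cur = cur @ R.T + t
        Ti = np.eye(4)
        Ti[:3, :3], Ti[:3, 3] = R, t
        T = Ti @ T
    return T
```

With exact correspondences this Gauss-Newton style iteration converges in a handful of steps; with re-estimated nearest neighbours it behaves like the classical point-to-plane ICP.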
2.3 screening keyframes
In the key frame screening step S24, the matching degree, the inlier root mean square error, and the pose change amplitude are used as the screening criteria.
Matching degree: represents the size of the overlap region between the two point cloud frames, specifically the number of matched inliers in the target point cloud P; the higher the matching degree, the better the point cloud registration.
Inlier root mean square error: the root mean square error over all matched inliers; the smaller it is, the better the point cloud registration.
Pose change amplitude: represents the inter-frame motion amplitude of the sensor, obtained by adding the norm of the rotation part and the norm of the translation vector of the pose matrix T_{l,c}; a larger value indicates a larger motion amplitude:
Norm(T) = |min(||T_rot||, 2π − ||T_rot||)| + ||T_trans||
where T_rot is the rotation matrix (3×3) of the transformation matrix (4×4); T_trans is the translation vector (3×1) of the transformation matrix (4×4); ||T_rot|| is the two-norm of the rotation matrix; ||T_trans|| is the two-norm of the translation vector; the norm of a matrix can be understood simply as a measure of the matrix; min takes the smaller value; |·| takes the absolute value.
If the matching degree is above its threshold, the inlier root mean square error is below its threshold, and the pose change amplitude is moderate, the frame is added to the key frames for subsequent three-dimensional reconstruction.
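The screening rule can be sketched as below. Reading ||T_rot|| as the rotation angle of the 3×3 rotation block is one plausible interpretation of the formula, and all thresholds are made-up placeholders:

```python
import numpy as np

def pose_norm(T):
    """Norm(T) = |min(||T_rot||, 2*pi - ||T_rot||)| + ||T_trans||, reading
    ||T_rot|| as the rotation angle of the 3x3 rotation block (an assumption)."""
    R, t = T[:3, :3], T[:3, 3]
    ang = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    return abs(min(ang, 2 * np.pi - ang)) + np.linalg.norm(t)

def is_keyframe(match_ratio, inlier_rmse, T,
                min_match=0.3, max_rmse=0.02, lo=0.05, hi=0.5):
    """Hypothetical thresholds: enough overlap, low inlier RMSE, moderate motion."""
    n = pose_norm(T)
    return match_ratio > min_match and inlier_rmse < max_rmse and lo < n < hi
```

The "moderate" motion band rejects both near-duplicate frames (too little motion) and frames whose registration is likely unreliable (too much motion).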
3. Loop detection and back-end pose optimization
In the process of continuously computing poses by multi-frame registration, each computation depends on the previous registration result, so an accumulated error inevitably arises; as the number of key frames grows, a large accumulated error degrades the precision of the three-dimensional reconstruction model. The invention uses loop detection and pose graph optimization to reduce the influence of accumulated error during reconstruction.
3.1 Loop-back detection and pose graph construction
Loop detection: finding, among the historical key frames, frames that are not adjacent to the current key frame but are close to it in position. Pose graph: a graph model from graph theory used to represent the nonlinear least squares problem, consisting of a number of vertices and the edges connecting them; fig. 7 shows such a pose graph. During reconstruction, the pose graph is continuously updated and constructed while loop detection is performed, as follows:
a) whenever a key frame is detected, add a vertex to the pose graph and record the pose matrix of that frame; add an edge connecting the current vertex and the previous vertex, and record the transformation pose matrix between the adjacent key frames; take the translation vector of the frame's pose matrix as its coordinate and record it;
b) when the total number of key frames exceeds 10, use a k-dimensional tree (KD-Tree) to search for the 5 key frames whose coordinates are closest to the current key frame; KD-Tree is short for k-dimensional tree, a tree-like data structure that stores instance points in a k-dimensional space for fast retrieval;
c) if the index difference between a retrieved key frame and the current key frame exceeds 10, determine that a loop closure has occurred; otherwise determine that there is no loop and repeat the above steps;
d) after a loop closure is detected, perform inter-frame registration between the loop frame point cloud and the current frame point cloud, connect the two corresponding vertices with an edge, record the pose error between the two frames, update and optimize the pose graph, and repeat the above steps.
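The KD-Tree search in steps b) and c) might look like this with SciPy's cKDTree; the data layout and thresholds follow the text, while function names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_loop(positions, cur, k=5, min_total=10, min_gap=10):
    """positions: (N, 3) keyframe coordinates (translation parts of their poses);
    cur: index of the current keyframe. Returns the index of a loop candidate or
    None. Thresholds follow the text: search only when more than 10 keyframes
    exist, take the 5 nearest, and require an index gap above 10."""
    if len(positions) <= min_total:
        return None
    tree = cKDTree(positions)
    kk = min(k + 1, len(positions))            # k+1: the query point is its own NN
    _, idx = tree.query(positions[cur], k=kk)
    for j in idx:
        if j != cur and abs(cur - j) > min_gap:
            return int(j)
    return None
```

In the full pipeline, a returned candidate triggers the inter-frame registration of step d) and adds a loop edge to the pose graph.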
3.2. Pose graph optimization
In the loop detection and back-end pose optimization step S3, pose optimization reduces the accumulated error of point cloud registration by applying least squares optimization to the pose graph, distributing the residual error over all key frames. In the pose graph optimization, a graph vertex is an optimization variable of the nonlinear least squares problem, expressed as an optimized key frame pose matrix; the edges connecting vertices are error terms between the optimization variables, expressed as inter-frame pose estimation errors. Let i and j be the vertices corresponding to two key frames, T_i and T_j the pose matrices of vertices i and j respectively, and T_ij the transformation matrix between vertices i and j; the error e_ij of the edge connecting the two vertices is expressed as
e_ij = ln(T_ij^{-1} · T_i^{-1} · T_j)^∨
where the superscript "-1" denotes matrix inversion and the superscript "∨" denotes the operation that recovers the vector uniquely corresponding to an antisymmetric matrix.
Pose graph optimization is completed with the g2o optimization tool, which minimizes the errors of adjacent edges and loop edges and yields each optimized key frame pose matrix.
To ensure that no large deformation arises from accumulated errors during registration, the pose graph is optimized once each time a loop closure is detected, and optimized once more after all inter-frame registration is finished; the optimized pose results are then written back to all key frames. The key frame trajectories before and after pose graph optimization are shown in fig. 8.
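The patent performs this optimization on SE(3) poses with g2o; the least-squares structure can be illustrated with a toy 1D pose graph, where the loop edge pulls the drifted odometry chain back toward consistency. Everything here is a didactic stand-in, not the actual implementation:

```python
import numpy as np

def optimize_pose_graph_1d(n, odo, loops):
    """Toy 1D analogue of pose graph optimization. Vertices are scalar poses x_i;
    odometry edges measure x_{i+1} - x_i; loop edges (i, j, z) measure x_j - x_i;
    x_0 is anchored at 0 to remove the gauge freedom. The problem is linear, so
    a single least-squares solve suffices (SE(3) needs iterated relinearization)."""
    edges = [(i, i + 1, z) for i, z in enumerate(odo)] + list(loops)
    A = np.zeros((len(edges) + 1, n))
    b = np.zeros(len(edges) + 1)
    for r, (i, j, z) in enumerate(edges):
        A[r, j] = 1.0
        A[r, i] = -1.0
        b[r] = z
    A[-1, 0] = 1.0     # anchor x_0 = 0
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With four odometry steps that each overestimate the true step and a loop edge stating the true end-to-end offset, the solver spreads the residual evenly over all steps, exactly the "average the residual over all key frames" behavior described above.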
4 model surface reconstruction
TSDF (Truncated Signed Distance Function) is a surface reconstruction algorithm that uses structured point cloud data and expresses a surface parametrically. It maps the point cloud data into a predefined three-dimensional space and uses a truncated signed distance function to represent the region near the surface of the real scene, from which the surface model is built. Fig. 9 is a schematic diagram of a TSDF voxel model.
In the model surface reconstruction step S4, after the pose graph is optimized, the depth map, the point cloud, and the optimized pose graph are combined, and the TSDF feature is used to perform surface reconstruction on the satellite point cloud, which specifically includes the following steps:
a) divide the whole modeling space evenly into small cubes (voxels) of a given size, each storing a TSDF value that represents the distance from that position to the object surface;
b) integrate the depth map and the key frame point cloud data into the TSDF volume, updating the TSDF value by a weighted running average;
c) a TSDF value greater than 0 means the voxel lies outside the object, less than 0 means inside the object, and equal to 0 means on the object surface; the surface of the reconstructed object is therefore extracted with the marching cubes algorithm; and
d) render the final model of the satellite by ray tracing.
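Steps a) and b) amount to a weighted running average of truncated signed distances per voxel. A toy NumPy version, with a crude centroid-based inside/outside test standing in for the usual depth-map projection, might look like:

```python
import numpy as np

def integrate_tsdf(tsdf, weight, points, origin, voxel, mu):
    """Fuse one frame of surface points into a TSDF volume by weighted averaging.
    Each voxel stores a signed distance to the nearest surface sample, truncated
    to [-mu, mu]; the sign here is approximated as inside/outside relative to the
    point cloud centroid (a toy stand-in for projecting into the depth map)."""
    nx, ny, nz = tsdf.shape
    grid = np.stack(np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                                indexing='ij'), axis=-1)
    centers = origin + (grid + 0.5) * voxel          # voxel centers (nx, ny, nz, 3)
    c = points.mean(axis=0)
    # unsigned distance from each voxel center to the closest surface sample
    d = np.min(np.linalg.norm(centers[..., None, :] - points, axis=-1), axis=-1)
    # sign: negative inside (closer to the centroid than the surface is on average)
    sign = np.where(np.linalg.norm(centers - c, axis=-1) <
                    np.linalg.norm(points - c, axis=-1).mean(), -1.0, 1.0)
    sd = np.clip(sign * d, -mu, mu)
    new_w = weight + 1.0
    tsdf[:] = (tsdf * weight + sd) / new_w           # weighted running average
    weight[:] = new_w
    return tsdf, weight
```

After fusing all key frames, the zero level set of `tsdf` would be handed to marching cubes (step c) to extract the mesh.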
The point cloud-based method for three-dimensional reconstruction of a satellite with weak surface texture thus estimates the pose from the processed point clouds, screens key frame point clouds according to the pose results, optimizes the poses, and finally registers and fuses the point clouds to complete the reconstruction of the model surface. It can achieve satellite three-dimensional reconstruction under poor satellite surface texture and poor illumination conditions, providing a good basis for the identification and capture of satellite components.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The foregoing describes in detail the solution provided in the embodiments of the present application; specific examples are used herein to illustrate its principle and implementation, and the description of the above embodiments is only intended to help understand the technical solution and core idea of the present application. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the present disclosure as defined by the appended claims.
Claims (10)
1. A point cloud-based surface weak texture satellite three-dimensional reconstruction method is characterized by comprising the following steps:
point cloud obtaining and preprocessing, namely obtaining a single-frame point cloud of a satellite and preprocessing the point cloud;
performing FPFH (Fast Point Feature Histogram) feature coarse registration and ICP (Iterative Closest Point) fine registration on the point clouds to acquire the accurate pose of the satellite, and then screening key frames; the FPFH feature coarse registration puts the point cloud features into a histogram in a unified manner to obtain a coarse registration pose; the ICP fine registration takes the coarse registration pose as the initial pose and iterates to obtain a fine registration pose;
performing pose graph updating and loop detection based on the selected key frame, wherein the loop detection is to find out a frame which is not adjacent to but close to the current key frame in the position from the historical key frame; judging whether loop appears or not, optimizing the pose graph when loop appears, and judging whether the registration of the point cloud between frames is finished or not after the pose graph is optimized and when the loop does not appear; when the registration of the point clouds between frames is not finished, returning to the point cloud obtaining and preprocessing step to obtain the point cloud of the next frame; when the registration of the point cloud between frames is finished, optimizing a pose graph; and
and a model surface reconstruction step: after the pose graph is optimized, performing TSDF surface reconstruction to complete the satellite three-dimensional modeling.
2. The point cloud-based surface weak texture satellite three-dimensional reconstruction method according to claim 1, wherein the point cloud preprocessing step in the point cloud acquisition and preprocessing step comprises:
for each frame of point cloud, firstly, according to the spatial position of the satellite, using spatial three-dimensional coordinate value screening to remove the background to obtain the point cloud only with the satellite; and then sequentially removing outliers and carrying out voxel filtering down-sampling on the point cloud of the satellite, and calculating a corresponding normal vector to finish the point cloud pretreatment.
3. The point cloud-based surface weak texture satellite three-dimensional reconstruction method according to claim 1, wherein the inter-frame point cloud registration and key frame selection step comprises:
a point cloud data processing step, wherein a fixed local coordinate system is defined, a single-frame point cloud obtained firstly is a target point cloud, and a point cloud obtained later is a source point cloud;
the method comprises the following steps of performing rough registration on FPFH (field programmable gate flash) characteristics, namely extracting the FPFH characteristics of source point clouds and target point clouds in a local coordinate system, matching and adjusting the pose of the FPFH characteristics of the source point clouds, combining the FPFH characteristics of the source point clouds with the FPFH characteristics of the target point clouds to form corresponding characteristic point pairs, and after the corresponding characteristic point pairs are updated in an iterative mode, overlapping the positions of the corresponding characteristic point pairs to form a point fast characteristic histogram of a satellite as a final pose of the rough registration;
an ICP fine registration step, wherein in the local coordinate system, the final pose of the coarse registration is used as an initial pose, a satellite edge frame in the initial pose is identified as a historical key frame, a plane where the historical key frame is located is used as a reference plane to carry out coordinate transformation on the source point cloud, and the accurate pose of the satellite is formed as a fine registration pose after the preset iteration times are iteratively calculated or the distance between the point of the source point cloud and the reference plane is smaller than a distance threshold; and
and a key frame screening step, wherein the satellite edge frame is screened as the current key frame based on the accurate pose of the satellite.
4. The point cloud-based surface weak texture satellite three-dimensional reconstruction method according to claim 3, wherein the FPFH feature coarse registration step comprises an FPFH feature extraction step, wherein the FPFH feature describes the local geometric characteristics of a point using a 33-dimensional feature vector; the calculation of the FPFH feature is divided into two steps:
a) define a fixed local coordinate system and compute the series of characteristic values α, φ and θ between each query point p_s in the point cloud and its neighborhood points using the following formulas, putting the characteristic values into a histogram in a unified manner to obtain the simplified point feature histogram:
α = v · n_t
φ = u · (p_t − p_s) / ||p_t − p_s||
θ = arctan(w · n_t, u · n_t);
where p_s is a point in the point cloud;
p_t is a neighborhood point of p_s;
n_s and n_t are the normals at the corresponding points;
u, v, w are the three axes of the local coordinate system constructed with p_s as the origin;
α is the angle between n_t and the v axis;
φ is the angle between n_s and (p_t − p_s);
θ is the angle between the projection of n_t onto the u–w plane and the u axis; and
b) re-determine a k-neighborhood for each point in the point cloud and compute the FPFH value of the query point p_q from the neighboring SPFH (Simplified Point Feature Histogram) values using the following formula:
FPFH(p_q) = SPFH(p_q) + (1/k) · Σ_{i=1}^{k} (1/ω_i) · SPFH(p_i)
where the k-neighborhood is the set formed by the k points nearest to a given point;
p_i is a point in the k-neighborhood of p_q;
ω_i is a weight representing the distance between the query point p_q and its neighbor p_i.
5. The point cloud-based surface weak texture satellite three-dimensional reconstruction method according to claim 4, further comprising, after the FPFH feature extraction step, a feature registration step based on random sample consensus, which specifically comprises:
a) first, randomly sample the FPFH features of the target point cloud, and query the feature points corresponding to the sampled points in the source point cloud;
b) then, based on the queried feature points, solve the inter-frame point cloud pose by the least squares method and apply the coordinate transformation;
c) then, in the transformed inter-frame point clouds, find all feature matching points of the target point cloud by querying the 33-dimensional FPFH feature space of the source point cloud, and eliminate feature mismatches based on the Euclidean distance of corresponding points, the length of the line connecting two features, and the feature point normal vectors; and
d) count the number of feature correspondences remaining after mismatch elimination, i.e. the number of inliers, and judge whether the iteration count has reached the termination condition; if the iteration has not terminated, update the corresponding feature point pairs and repeat steps a) to c); if the iteration has terminated, select the pose with the largest inlier count as the final pose result.
6. The point cloud-based surface weak texture satellite three-dimensional reconstruction method according to claim 3, wherein the ICP fine registration step comprises:
a) transform the source point cloud with the initial pose, then search for nearest-neighbor corresponding points between the transformed inter-frame point clouds; the matching set formed by corresponding points of the target point cloud p and the transformed source point cloud q is recorded as K = {(p, q)};
b) solve the pose matrix T by minimizing the point-to-plane objective function E(T) defined over the matching set K:
E(T) = Σ_{(p,q)∈K} ((p − T·q) · n_p)²
where E(T) is the objective function of ICP registration, expressed as a point-to-plane distance; T is the pose matrix to be solved by ICP registration, comprising a rotation matrix and a translation; and n_p is the normal vector at point p;
c) judge whether the iteration count or the distance threshold has reached the termination condition; if the iteration has not terminated, update the initial pose with the solved pose and repeat the above steps; if the iteration has terminated, the accurate pose is obtained.
7. The point cloud-based surface weak texture satellite three-dimensional reconstruction method according to claim 1, wherein the key frame screening takes the matching degree, the inlier root mean square error, and the pose change amplitude as its criteria;
the matching degree represents the size of the overlap region between the two point cloud frames, specifically the number of inliers in the target point cloud P matched with the source point cloud;
the inlier root mean square error is the root mean square error over all matched inliers;
the pose change amplitude represents the motion amplitude of the sensor acquiring the point clouds, obtained by adding the norm of the rotation part and the norm of the translation vector of the pose matrix T_{l,c}.
8. The point cloud-based surface weak texture satellite three-dimensional reconstruction method according to claim 1, wherein in the loop detection and back-end pose optimization steps, the loop detection steps are as follows:
a) whenever a key frame is detected, add a vertex to the pose graph and record the pose matrix of that frame; add an edge connecting the current vertex and the previous vertex, and record the transformation pose matrix between the adjacent key frames; take the translation vector of the frame's pose matrix as its coordinate and record it;
b) when the total number of key frames exceeds 10, use a k-dimensional tree to search for the 5 key frames whose coordinates are closest to the current key frame;
c) if the index difference between a retrieved key frame and the current key frame exceeds 10, determine that a loop closure has occurred; otherwise determine that there is no loop and repeat the above steps;
d) after a loop closure is detected, perform inter-frame registration between the loop frame point cloud and the current frame point cloud, connect the two corresponding vertices with an edge, record the pose error between the two frames, update and optimize the pose graph, and repeat the above steps.
9. The point cloud-based surface weak texture satellite three-dimensional reconstruction method according to claim 1, characterized in that, in the loop detection and back-end pose optimization step, the pose optimization reduces the accumulated error of point cloud registration by applying least squares optimization to the pose graph, distributing the residual error over all key frames; in the pose graph optimization, a graph vertex is an optimization variable of the nonlinear least squares problem, expressed as an optimized key frame pose matrix; the edges connecting vertices are error terms between the optimization variables, expressed as inter-frame pose estimation errors; i and j are the vertices corresponding to key frames, T_i and T_j are the pose matrices of vertices i and j respectively, T_ij is the transformation matrix between vertices i and j, and the error e_ij of the edge connecting the two vertices is expressed as e_ij = ln(T_ij^{-1} · T_i^{-1} · T_j)^∨, where the superscript "-1" denotes matrix inversion and the superscript "∨" denotes the operation that recovers the vector uniquely corresponding to an antisymmetric matrix; pose graph optimization is completed with the g2o optimization tool, minimizing the errors of adjacent edges and loop edges to obtain each optimized key frame pose matrix.
10. The point cloud-based surface weak texture satellite three-dimensional reconstruction method according to claim 1, wherein in the model surface reconstruction step, after the pose graph is optimized, the depth map, the point cloud and the optimized pose graph are combined, and the TSDF feature is used to reconstruct the surface of the satellite point cloud, and the method specifically comprises the following steps:
a) divide the whole modeling space evenly into small cubes (voxels) of a given size, each storing a TSDF value that represents the distance from that position to the object surface;
b) integrate the depth map and the key frame point cloud data into the TSDF volume, updating the TSDF value by a weighted running average;
c) a TSDF value greater than 0 means the voxel lies outside the object, less than 0 means inside the object, and equal to 0 means on the object surface; the surface of the reconstructed object is therefore extracted with the marching cubes algorithm; and
d) render the final model of the satellite by ray tracing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110874322.9A CN113706591B (en) | 2021-07-30 | 2021-07-30 | Point cloud-based three-dimensional reconstruction method for surface weak texture satellite |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113706591A true CN113706591A (en) | 2021-11-26 |
CN113706591B CN113706591B (en) | 2024-03-19 |
Family
ID=78651042
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110874322.9A Active CN113706591B (en) | 2021-07-30 | 2021-07-30 | Point cloud-based three-dimensional reconstruction method for surface weak texture satellite |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113706591B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115880690A (en) * | 2022-11-23 | 2023-03-31 | 郑州大学 | Method for quickly marking object in point cloud under assistance of three-dimensional reconstruction |
CN115951589A (en) * | 2023-03-15 | 2023-04-11 | 中科院南京天文仪器有限公司 | Star uniform selection method based on maximized Kozachenko-Leonenko entropy |
CN117829381A (en) * | 2024-03-05 | 2024-04-05 | 成都农业科技职业学院 | Agricultural greenhouse data optimization acquisition system based on Internet of things |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110930495A (en) * | 2019-11-22 | 2020-03-27 | 哈尔滨工业大学(深圳) | Multi-unmanned aerial vehicle cooperation-based ICP point cloud map fusion method, system, device and storage medium |
WO2021088481A1 (en) * | 2019-11-08 | 2021-05-14 | 南京理工大学 | High-precision dynamic real-time 360-degree omnibearing point cloud acquisition method based on fringe projection |
CN112907491A (en) * | 2021-03-18 | 2021-06-04 | 中煤科工集团上海有限公司 | Laser point cloud loopback detection method and system suitable for underground roadway |
Non-Patent Citations (2)
Title |
---|
ZHANG Jian; LI Xinle; SONG Ying; WANG Ren; ZHU Fan; ZHAO Xiaoyan: "Three-dimensional scene reconstruction method based on noisy point clouds", Computer Engineering and Design, no. 04 |
LI Yipeng; XIE Yongchun: "Three-dimensional reconstruction of non-cooperative targets based on point cloud pose averaging", Aerospace Control and Application, no. 01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||