CN114648584A - Robustness control method and system for multi-source fusion positioning
- Publication number
- CN114648584A (application number CN202210562354.XA)
- Authority
- CN
- China
- Prior art keywords
- error
- point
- representing
- points
- coordinate system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1652—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1656—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention provides a robustness control method and system for multi-source fusion positioning. The method comprises the following steps: extracting visual feature points from the same frame of image data of the same scene, obtaining a first pose through visual inertial navigation fusion, and calculating a visual feature point error and a first reprojection error according to the first pose; performing inertial navigation and laser radar feature point extraction on the point cloud data of the same frame of the same scene, and obtaining a second pose through inertial navigation laser radar fusion; calculating a point cloud feature point error and a second reprojection error according to the second pose; calculating a third pose between the laser radar and the camera, and calculating a camera position feature point error and a third reprojection error according to the third pose; and comparing the reprojection errors, performing weight ranking on the respective feature point errors, and dividing the feature points into quality grades. A robustness control system is also provided on the basis of the method. The method can increase the solving speed of the system, improve the environmental adaptability and accuracy of the system, and improve the robustness of the system.
Description
Technical Field
The invention belongs to the technical field of multi-source fusion positioning of unmanned vehicles, and particularly relates to a robustness control method and system for multi-source fusion positioning.
Background
Multi-source fusion positioning has become a mainstream technology for unmanned vehicle positioning and navigation, effectively overcoming drawbacks of single sensors on the unmanned vehicle such as error accumulation and poor stability. Based on information fusion technology, multi-source fusion positioning fuses homogeneous or heterogeneous navigation information from different navigation sources according to a corresponding fusion algorithm, so as to obtain an optimal fusion result. Compared with a traditional single navigation source, multi-source fusion navigation can make full use of the advantages of each navigation source and thereby provide the best positioning and navigation service for unmanned vehicles. Research on the robustness of multi-source fusion positioning systems is therefore necessary and benefits the long-term development of multi-source fusion positioning technology.
The prior art has disclosed a number of approaches to the robustness of multi-source fusion localization techniques. The first scheme provides a multi-source sensor information fusion navigation system based on inertial navigation, an odometer and ultra-wideband: a factor graph model is used to establish the fusion framework, the information of each sensor is abstracted into factors, and an incremental smoothing optimization algorithm based on a Bayesian tree structure (iSAM2) is adopted to process and fuse the sensor information. The second scheme provides an indoor positioning method that fuses IMU (inertial measurement unit) and UWB (ultra-wideband) information using unscented Kalman filtering (UKF), which effectively suppresses the error accumulation of the IMU and improves the solution accuracy of the nonlinear observation equation. A third approach uses extended Kalman filtering to fuse UWB (a high-precision positioning unit), IMU and vision sensors; it allows a low-cost inertial measurement unit (IMU) to be used in the prediction step and integrates visual ranging with marker detection, reaching a precision of 10 cm.
In summary, the common methods for multi-sensor information fusion in the prior art can basically be grouped into four categories: probabilistic estimation algorithms, recognition algorithms, parametric algorithms and artificial intelligence algorithms. Each category comprises a number of different typical methods, and the essential idea of multi-sensor information fusion is a weighted average in which the data acquired by different sensors are given different weights according to their accuracy. Many traditional classical filtering algorithms originate from Bayesian filtering; particle filtering, Kalman filtering, hidden Markov models and dynamic Bayesian networks are representative practical filtering algorithms. The core of the Bayesian approach is to calculate the corresponding prior probability distribution and information fusion feature values from the known sensor input data, where the feature values describe the overall environmental characteristics provided by the prior model. However, current navigation systems suffer from problems such as asynchronous sensor sampling times. Navigation systems in the prior art generally integrate multiple sensors; different sensors are affected by different working environments, and the transmission of sensor information also suffers from delays. The prior art mostly adopts extrapolation, interpolation, curve fitting and similar methods in the information fusion process, but these methods cannot guarantee high synchronization accuracy. Different working environments and tasks also impose requirements on the selection of sensors, and the combination of sensors is related to the operating environment and the task, so problems such as insufficient information fusion and underuse of the characteristic advantages of each sensor remain. Therefore, traditional data fusion methods lack a certain adaptive capability and can hardly allow the unmanned vehicle to achieve satisfactory performance in a complex environment.
Disclosure of Invention
In order to solve the above technical problem, the invention provides a robustness control method and system for multi-source fusion positioning, which can increase the solving speed of the unmanned vehicle system, improve its environmental adaptability and accuracy, improve its robustness, and reduce the participation of erroneous points or points with large errors in the solution.
In order to achieve the purpose, the invention adopts the following technical scheme:
a robustness control method for multi-source fusion positioning comprises the following steps:
extracting visual feature points from the same frame of image data of the same scene to obtain the pixel positions of the visual feature points in the image pixel coordinate system, obtaining a first pose through visual inertial navigation fusion, and calculating a visual feature point error and a first reprojection error according to the first pose;
performing inertial navigation and laser radar feature point extraction on ground point cloud data of the same frame of the same scene to obtain three-dimensional coordinates of point cloud feature points under world coordinates, and obtaining a second pose through inertial navigation laser radar fusion; calculating a point cloud feature point error and a second reprojection error according to the second pose;
calculating a third pose of the laser radar and the camera by adopting a PNP method based on the pixel position of the visual feature point in an image pixel coordinate system and the three-dimensional coordinates of the point cloud feature point in world coordinates, and calculating a camera position feature point error and a third reprojection error according to the third pose;
and comparing the first reprojection error, the second reprojection error and the third reprojection error, respectively performing weight ranking on the visual feature point error, the point cloud feature point error and the camera position feature point error, and performing quality grade division after the environmental features are matched.
Further, before the feature point extraction, the method further comprises: initializing the integrated navigation system;
the process of initializing the integrated navigation system comprises the following steps: the angular velocity of the navigation coordinate system relative to the Earth coordinate system caused by the carrier velocity is not considered during the attitude update, and the velocity of the navigation coordinate system caused by the carrier velocity is not considered during the velocity update; on this premise, the attitude update formula and the velocity update formula during open-loop initial alignment are applied, in which the symbols denote: the attitude matrix of the carrier coordinate system in the navigation coordinate system; the angular velocity of the carrier coordinate system relative to the Earth-centered inertial coordinate system; the rotation angular velocity of the navigation frame caused by the Earth's rotation; the projection of the specific force in the navigation coordinate system; and the gravitational acceleration;
the error equation of the integrated navigation system involves: the attitude matrix error; the angular velocity of the navigation frame relative to the inertial frame; the zero-drift error of the gyroscope; the velocity error; the projection of the specific force in the navigation coordinate system; the specific force value measured by the accelerometer; and the constant bias of the accelerometer.
Further, the image data feature extraction process includes:
extracting pixel points with obvious differences in the image data by adopting an improved FAST corner detector, wherein a pixel point with an obvious difference is one whose gray value differs from the surrounding adjacent pixels by more than a set threshold;
improving the rotation invariance of the feature points by adopting a gray centroid algorithm, and determining the positions of the ORB feature points in the image data by characterizing the pixel points with a BRIEF descriptor.
Further, the detailed process of the visual inertial navigation fusion comprises:
all state vectors within the sliding window are defined:
wherein the state comprises: the rotation matrix between the i-th inertial navigation frame and the world coordinate system; the translation; the velocity vector of inertial navigation in the world coordinate system; the accelerometer bias; and the rotation (gyroscope) bias; n, m and o respectively denote the indices of the IMU state vectors, landmark points and landmark lines within the sliding window; N, M and O respectively denote the number of keyframes in the sliding window and the numbers of landmark points and landmark lines observed by all keyframes in the sliding window; the state further comprises the inverse depth at which the k-th landmark point is first observed and an orthogonal representation of the corresponding feature line;
the sliding window based nonlinear optimization is done by minimizing the residuals of all state quantities within the sliding window, where the objective function is expressed as:
wherein the terms denote: the prior information of frames removed from the sliding window because they are too old or share too few co-visible features; the residual; the Jacobian matrix; all state vectors within the sliding window; and the Cauchy robust function used as a penalty term in the iterative constraint process;
the inertial navigation pre-integration term between the state quantities of the i-th frame and the (i+1)-th frame involves the error between inertial navigation frames and the inertial navigation inter-frame observation;
the visual feature point term involves the reprojection error of the visual feature points and the observed value on the normalized plane used when this reprojection error is calculated;
the visual feature line term involves the reprojection error of the visual feature lines and the observed value on the normalized plane used when this reprojection error is calculated;
to avoid oscillation caused by an overly large iteration step, the state is updated iteratively from its initial value by adding a small increment until the requirement is met:
wherein the quantities are: the Jacobian matrix of the error vector with respect to the state; the prior errors before and after the iteration; the inertial navigation pre-integration errors before and after the iteration; the visual feature point reprojection errors before and after the iteration; and the visual feature line reprojection errors before and after the iteration.
Further, the inertial navigation and lidar characteristic point extraction process comprises the following steps:
down-sampling the ground point cloud of each frame by using a voxel grid filtering method, and marking the down-sampled ground point cloud as ground feature points;
after the dynamic obstacles are removed, extracting line feature points and surface feature points from the remaining non-ground point cloud by defining a smoothness measure;
setting a smoothness threshold, and taking points whose smoothness is greater than the threshold as line feature points and points whose smoothness is less than the threshold as surface feature points.
Further, the inertial navigation lidar fusion process comprises the following steps:
selecting keyframes, with the sliding window containing a fixed number of sub-keyframes; the sub-keyframes are transformed into the world coordinate system and finally merged into a voxel map;
the voxel map contains the edge features of frames i, i-1, ..., i-n expressed in the world coordinate system;
and the surface (planar) features of frames i, i-1, ..., i-n expressed in the world coordinate system;
matching the edge features and the planar features, and establishing a pose-solving equation from the point-to-line and point-to-plane distance relations:
wherein the edge terms denote the edge coordinates of point k at time i+1 and the edge coordinates of points u and v at time i;
and the planar terms denote the surface coordinates of points u, v and w at time i and the surface coordinates of point k at time i+1;
the non-linear problem in scan matching is represented as an iteratively solved linear problem:
wherein the unknown is the transformation matrix of the keyframe in the world coordinate system, and A and b are obtained from the linearized pose, with A representing the rotation and b the translation.
Further, the process of extracting the feature points of the camera and the lidar includes:
In three-dimensional space there are n spatial points P with known coordinates, i = 1, 2, 3, 4, ..., n, whose projections onto the pixel coordinate system have known pixel coordinates; the relation between a camera spatial point and its pixel position can be written as:
wherein the terms denote: the scale information of the image pixel point; the matrix K, which is the intrinsic parameter matrix of the camera; and the rotation and translation matrices of the camera expressed on the Lie group;
an error-term equation is established by the least-squares method to estimate the pose of the camera:
formula (23) expresses the value of the pose at which the sum of error terms takes its minimum, and its optimization variable is the camera transformation matrix.
Further, the method further comprises adjusting the pose of the camera through the constructed reprojection-minimization equation so that the reprojection error of the camera position reaches its minimum value;
the coordinate p of the feature point is transformed into a point in the camera coordinate system, and the transformed coordinates are:
wherein the parameters are the focal length of the camera along the x axis, the focal length of the camera along the y axis, the x-axis optical center of the camera, and the y-axis optical center of the camera.
Furthermore, the method also comprises, after the quality grades are divided, controlling the number and quality of the feature points that participate in the calculation of the next frame, so that the smallest number of feature points participates in the calculation while the system reaches its highest achievable accuracy.
The invention also provides a robustness control system for multi-source fusion positioning, which comprises a first extraction and fusion module, a second extraction and fusion module, a calculation module and a division module;
the first extraction and fusion module is used for extracting visual feature points from the same frame of image data of the same scene to obtain the pixel positions of the visual feature points in the image pixel coordinate system, obtaining a first pose through visual inertial navigation fusion, and calculating a visual feature point error and a first reprojection error according to the first pose;
the second extraction and fusion module is used for performing inertial navigation and laser radar feature point extraction on the ground point cloud data of the same frame of the same scene to obtain the three-dimensional coordinates of the point cloud feature points in world coordinates, and obtaining a second pose through inertial navigation laser radar fusion; and calculating a point cloud feature point error and a second reprojection error according to the second pose;
the calculation module is used for calculating a third pose of the laser radar and the camera by adopting a PNP method based on the pixel position of the visual feature point in an image pixel coordinate system and the three-dimensional coordinate of the point cloud feature point in world coordinates, and calculating a camera position feature point error and a third reprojection error according to the third pose;
the dividing module is used for comparing the first reprojection error, the second reprojection error and the third reprojection error, respectively performing weight ranking on the visual feature point error, the point cloud feature point error and the camera position feature point error, and performing quality grade division after the environmental features are matched.
The effects described in this summary of the invention are only the effects of particular embodiments rather than all effects of the invention, and one of the above technical solutions has the following advantages or beneficial effects:
the invention provides a robustness control method and a system for multi-source fusion positioning, wherein the method comprises the following steps: extracting visual characteristic points aiming at the same frame of image data of the same scene to obtain pixel positions of the visual characteristic points under an image pixel coordinate system, obtaining a first position through visual inertial navigation fusion, and calculating a visual characteristic point error and a first reprojection error according to the first position; performing inertial navigation and laser radar feature point extraction on ground point cloud data of the same frame of the same scene to obtain three-dimensional coordinates of point cloud feature points under world coordinates, and obtaining a second pose through inertial navigation laser radar fusion; calculating a point cloud characteristic point error and a second reprojection error according to the second pose; calculating a third pose of the laser radar and the camera by adopting a PNP method based on the pixel position of the visual feature point in an image pixel coordinate system and the three-dimensional coordinates of the point cloud feature point in world coordinates, and calculating a camera position feature point error and a third reprojection error according to the third pose; and comparing the first re-projection error, the second re-projection error and the third re-projection error, respectively performing weight ranking on the visual characteristic point error, the point cloud characteristic point error and the phase position characteristic point error, and performing quality grade division after the environmental characteristics are matched. Based on a robustness control method for multi-source fusion positioning, a robustness control system for multi-source fusion positioning is also provided. The invention can control the quantity and quality of the feature points participating in the solution of the next frame, ensure that the solution is participated in with the minimum feature point quantity, can enable the unmanned vehicle system to achieve the highest accuracy which can be achieved, and compared with the scheme that all the feature points participate in the solution, the invention not only can improve the solution speed of the unmanned vehicle system, improve the environmental adaptability and the accuracy of the system, but also improve the robustness of the unmanned vehicle system, reduce the participation of points with errors or larger errors in the solution, and is convenient for the engineering application of multisource sensor data fusion.
Drawings
Fig. 1 is a frame diagram of an implementation of a robustness control method for multi-source fusion positioning according to embodiment 1 of the present invention;
fig. 2 is a detailed flowchart of implementation of a robustness control method for multi-source fusion positioning according to embodiment 1 of the present invention;
fig. 3 is a schematic diagram of a corner extraction pyramid in embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of a screening point according to example 1 of the present invention;
fig. 5 is a schematic diagram of a robustness control system for multi-source fusion positioning according to embodiment 2 of the present invention.
Detailed Description
In order to clearly explain the technical features of the present invention, the following detailed description of the present invention is provided with reference to the accompanying drawings. The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. To simplify the disclosure of the present invention, specific example components and arrangements are described below. Moreover, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. It should be noted that the components illustrated in the figures are not necessarily drawn to scale. Descriptions of well-known components and processing techniques and procedures are omitted so as to not unnecessarily limit the invention.
Example 1
Embodiment 1 of the invention provides a robustness control method for multi-source fusion positioning, which is based on data fusion of multi-source sensors comprising an inertial measurement unit (IMU), a camera and a laser radar, all of which are mounted on the unmanned vehicle system. Fig. 1 is a frame diagram of an implementation of the robustness control method for multi-source fusion positioning according to embodiment 1 of the present invention; fig. 2 is a detailed flowchart of the robustness control method for multi-source fusion positioning according to embodiment 1 of the present invention.
In step S200, the integrated navigation system is initialized.
The process of initializing the integrated navigation system comprises the following steps: the angular velocity of the navigation coordinate system relative to the Earth coordinate system caused by the carrier velocity is not considered during the attitude update, and the velocity of the navigation coordinate system caused by the carrier velocity is not considered during the velocity update; on this premise, the attitude update formula and the velocity update formula during open-loop initial alignment are applied, in which the symbols denote: the attitude matrix of the carrier coordinate system in the navigation coordinate system; the angular velocity of the carrier coordinate system relative to the Earth-centered inertial coordinate system; the rotation angular velocity of the navigation frame caused by the Earth's rotation; the projection of the specific force in the navigation coordinate system; and the gravitational acceleration.
The error equation of the integrated navigation system involves: the attitude matrix error; the angular velocity of the navigation frame relative to the inertial frame; the zero-drift error of the gyroscope; the velocity error; the projection of the specific force in the navigation coordinate system; the specific force value measured by the accelerometer; and the constant bias of the accelerometer.
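For readability, the update and error equations referred to above can be written compactly. The following sketch uses conventional strapdown inertial navigation notation chosen to match the symbol definitions just listed; it is an assumed standard form, not necessarily the exact formulation of the original formulas.

```latex
% Open-loop initial alignment, transport-rate terms neglected (assumed standard form):
\dot{C}_b^n = C_b^n\,[\omega_{ib}^b \times] - [\omega_{ie}^n \times]\,C_b^n   % attitude update
\dot{v}^n   = C_b^n f^b + g^n                                                % velocity update
% Integrated navigation error equations (assumed standard form):
\dot{\phi}      = \phi \times \omega_{in}^n + \varepsilon^n                  % attitude error with gyro drift
\delta\dot{v}^n = f^n \times \phi + C_b^n \nabla^b                           % velocity error with accelerometer bias
```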
In step S210, extracting visual feature points from the same frame of image data of the same scene to obtain pixel positions of the visual feature points in an image pixel coordinate system, obtaining a first pose through visual inertial navigation fusion, and calculating a visual feature point error and a first reprojection error according to the first pose.
The process of extracting visual feature points from the same frame of image data of the same scene to obtain the pixel positions of the visual feature points in the image pixel coordinate system comprises the following steps: extracting pixel points with obvious differences in the image data by adopting an improved FAST corner detector, wherein a pixel point with an obvious difference is one whose gray value differs from the surrounding adjacent pixels by more than a set threshold;
improving the rotation invariance of the feature points by adopting a gray centroid algorithm, and determining the positions of the ORB feature points in the image data by characterizing the pixel points with a BRIEF descriptor.
In this step, a FAST corner is determined by the degree of gray-level change of the image pixels: if the difference between the gray value of a pixel and the gray values of its neighboring pixels exceeds a set threshold, the point can be judged to be a FAST corner. Extracting FAST corners requires a distance relation between pixels; the feature extraction method here uses the city-block (Manhattan) distance to describe the positional relationship between two pixel points, as follows:
The gray values of the pixel point p and of the surrounding pixel points at a city-block distance of 3 are extracted. If the gray values of N consecutive such pixels differ from the gray value of p by more than 25%, the point p is determined to be a FAST corner; the threshold is set according to the indoor illumination environment.
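As an illustration only, a minimal Python sketch of this segment test is given below. The ring of pixels at city-block distance 3, the 25% relative gray-value threshold and the requirement of N consecutive differing pixels follow the description above; the run length N and all helper names are assumptions.

```python
import numpy as np

# Pixels at a city-block (Manhattan) distance of 3 from the candidate corner p.
RING = [(3, 0), (2, 1), (1, 2), (0, 3), (-1, 2), (-2, 1),
        (-3, 0), (-2, -1), (-1, -2), (0, -3), (1, -2), (2, -1)]

def is_fast_corner(gray: np.ndarray, x: int, y: int,
                   n_required: int = 9, rel_thresh: float = 0.25) -> bool:
    """True if N consecutive ring pixels differ from p's gray value by more than 25%."""
    p = float(gray[y, x])
    t = rel_thresh * p                         # threshold scaled to the center gray value
    diff = [abs(float(gray[y + dy, x + dx]) - p) > t for dx, dy in RING]
    diff += diff                               # duplicate so runs can wrap around the ring
    run = 0
    for d in diff:
        run = run + 1 if d else 0
        if run >= n_required:
            return True
    return False
```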
In the invention, in order to determine the direction information of the corner point p, the centroid angle information of the surrounding image is combined, and a gray centroid algorithm is introduced to ensure that the same corner can still be accurately identified when the image rotates. The gray value information of the corner point is represented by I(x, y), and the moment of the acquired image block B can be defined as:
the centroid of image block B can be represented by the moment of the image:
the direction of the FAST corner point can be obtained according to the connecting line of the geometric center and the centroid of the image block as follows:
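The image-block moments, the centroid and the corner orientation referred to in the last three paragraphs are normally written as follows; this is the standard gray-centroid formulation and is assumed to match the omitted formulas.

```latex
m_{pq} = \sum_{x,y \in B} x^{p} y^{q}\, I(x,y), \qquad p,q \in \{0,1\}
C = \left( \frac{m_{10}}{m_{00}}, \; \frac{m_{01}}{m_{00}} \right), \qquad
\theta = \arctan\!\left( \frac{m_{01}}{m_{10}} \right)
% \theta is the angle of the line from the geometric centre of image block B to its gray centroid C.
```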
adding BRIEF descriptor information into the collected corner information, describing the pixel size relationship near the corner by using 0 and 1, randomly selecting surrounding pixel points p and q by using a probability distribution method, if the value of p is larger than that of q, taking 1, and otherwise, taking 0. The binary descriptors are used for representing the environment around the corner points, the selecting speed is high, the storage is convenient, the real-time matching can be applied, and meanwhile, the image pyramid algorithm is introduced to improve the characteristic matching influence of image scaling change. Fig. 3 is a schematic diagram of a corner extraction pyramid in embodiment 1 of the present invention; the first piece of collected data is an original image, and is sequentially subjected to a down-sampling process to obtain half of the size of the original image and is placed on the topmost layer. After 8 times of downsampling, when the same corner point is determined as a feature point in each layer, the corner point is determined as a final feature point to be extracted and subsequent matching work is carried out.
A first pose is obtained through visual inertial navigation fusion, and the process of calculating the visual feature point error and the first reprojection error according to the first pose comprises the following steps:
The visual inertial navigation fusion system is divided into modules such as data preprocessing, estimator initialization and VIO. It mainly addresses problems such as visual positioning failure caused by illumination changes, lack of texture and motion blur. Pure visual pose estimation is performed with a sliding-window method: feature matches between the latest frame and previous frames are detected, and feature tracks with stable and sufficient parallax within the sliding window are matched. To reduce the amount of calculation, only a part of the co-visible frames carrying key information is selected; these frames are called keyframes, and the number of keyframes contained in the window is then controlled through a strategy.
All state vectors within the sliding window are defined:
wherein the state comprises: the rotation matrix between the i-th inertial navigation frame and the world coordinate system; the translation; the velocity vector of inertial navigation in the world coordinate system; the accelerometer bias; and the rotation (gyroscope) bias; n, m and o respectively denote the indices of the IMU state vectors, landmark points and landmark lines within the sliding window; N, M and O respectively denote the number of keyframes in the sliding window and the numbers of landmark points and landmark lines observed by all keyframes in the sliding window; the state further comprises the inverse depth at which the k-th landmark point is first observed and an orthogonal representation of the corresponding feature line;
the sliding window based nonlinear optimization is done by minimizing the residuals of all state quantities within the sliding window, where the objective function is expressed as:
wherein the terms denote: the prior information of the frame removed from the sliding window because it is too old or shares too few co-visible features; the residual; the Jacobian matrix; all state vectors within the sliding window; and the Cauchy robust function used as a penalty term in the iterative constraint process;
the inertial navigation pre-integration term between the state quantities of the i-th frame and the (i+1)-th frame involves the error between inertial navigation frames and the inertial navigation inter-frame observation;
the visual feature point term involves the reprojection error of the visual feature points and the observed value on the normalized plane used when this reprojection error is calculated;
the visual feature line term involves the reprojection error of the visual feature lines and the observed value on the normalized plane used when this reprojection error is calculated;
to avoid oscillation caused by an overly large iteration step, the state is updated iteratively from its initial value by adding a small increment until the requirement is met:
wherein the quantities are: the Jacobian matrix of the error vector with respect to the state; the prior errors before and after the iteration; the inertial navigation pre-integration errors before and after the iteration; the visual feature point reprojection errors before and after the iteration; and the visual feature line reprojection errors before and after the iteration.
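The structure of this sliding-window objective and of the damped incremental update can be sketched as follows. The notation is generic (X for the window states, r for residuals, J for Jacobians, ρ for the Cauchy robust function) and is an assumed standard form rather than the patent's exact symbols.

```latex
% Sliding-window objective (assumed standard form): prior + inertial pre-integration
% + visual point reprojection + visual line reprojection terms, each wrapped in the robust cost \rho:
\min_{\mathcal{X}} \Big\{ \| r_p - J_p \mathcal{X} \|^2
  + \sum_i \rho\!\left(\| r_{\mathrm{imu}}(z_i,\mathcal{X}) \|^2\right)
  + \sum_k \rho\!\left(\| r_{\mathrm{pt}}(z_k,\mathcal{X}) \|^2\right)
  + \sum_l \rho\!\left(\| r_{\mathrm{line}}(z_l,\mathcal{X}) \|^2\right) \Big\}
% Damped Gauss-Newton step with a small increment \delta x to avoid oscillation:
(J^{\top} J + \lambda I)\,\delta x = -J^{\top} r, \qquad \mathcal{X} \leftarrow \mathcal{X} \oplus \delta x
```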
In step S220, performing inertial navigation and lidar feature point extraction on ground point cloud data of the same frame of the same scene to obtain three-dimensional coordinates of point cloud feature points in world coordinates, and obtaining a second pose through inertial navigation lidar fusion; calculating a point cloud characteristic point error and a second reprojection error according to the second pose;
The detailed process of performing inertial navigation and laser radar feature point extraction on the ground point cloud data of the same frame of the same scene to obtain the three-dimensional coordinates of the point cloud feature points in world coordinates comprises: down-sampling the ground point cloud of each frame with a voxel grid filtering method and marking the down-sampled ground point cloud as ground feature points;
after the dynamic obstacles are removed, extracting line feature points and surface feature points from the remaining non-ground point cloud by defining a smoothness measure;
setting a smoothness threshold, and taking points whose smoothness is greater than the threshold as line feature points and points whose smoothness is less than the threshold as surface feature points.
For the extraction of ground feature points, a geometric constraint is added on the basis of the depth image, and the vertical angle and the elevation difference jointly constrain the ground points. Since the lidar is mounted horizontally on the carrier, this ground extraction method has the advantage that it does not strictly require the ground to be perfectly horizontal. The ground point cloud of each frame is down-sampled to different degrees with a voxel grid filtering method, and the down-sampled points are marked as ground feature points.
After the dynamic obstacles are removed, line feature points and surface feature points are extracted from the remaining non-ground point cloud by defining a smoothness u, as follows:
in the same scan, five points to the left and five points to the right of any point A are selected;
the difference between the X coordinate of each of these points and the X coordinate of point A is calculated, and all the differences are summed;
the difference between the Y coordinate of each of these points and the Y coordinate of point A is calculated, and all the differences are summed;
the difference between the Z coordinate of each of these points and the Z coordinate of point A is calculated, and all the differences are summed.
The five points at each edge of a scan do not participate in feature point selection, because the condition of computing smoothness from five points in front and five points behind cannot be satisfied there. To prevent the feature points in one frame of data from clustering excessively, each scan in each frame of point cloud is divided into six sectors from the start of the scan to its end, and a smoothness threshold is set. Line feature point selection condition: a point whose smoothness is greater than the threshold is regarded as a line feature point; if a line feature point already exists among the five points before or after it, the point is skipped and a point with smaller smoothness is selected instead. Surface feature point selection condition: a point whose smoothness is less than the threshold is regarded as a surface feature point; if a surface feature point already exists among the five points before or after it, the point is skipped and the selection continues among points with higher smoothness. For each frame of point cloud, points with larger u values are marked as line feature points and points with smaller u values as surface feature points; from the line candidates the points with the largest u values satisfying the condition are selected, and from the surface candidates the points with the smallest u values satisfying the condition are selected. Three kinds of feature points, namely ground feature points, non-ground line feature points and non-ground surface feature points, are thus extracted for each frame of laser point cloud, and each feature point cloud is stored in two parts. FIG. 4 is a schematic diagram of the screening points in embodiment 1 of the present invention.
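A minimal Python sketch of this smoothness computation and line/surface classification is given below as an illustration. The five-point neighbourhood follows the description; using the squared norm of the summed coordinate differences as u, and the threshold value, are assumptions.

```python
import numpy as np

def smoothness(scan_xyz: np.ndarray, k: int = 5) -> np.ndarray:
    """Smoothness u for each point of one scan line (N x 3 array of XYZ coordinates).

    For every point A, the X, Y and Z differences of the k points on each side of A
    are summed; u is taken here as the squared norm of that summed difference vector
    (the exact norm used in the patent is not specified)."""
    n = len(scan_xyz)
    u = np.full(n, np.nan)                       # edge points get no smoothness value
    for i in range(k, n - k):
        diff = scan_xyz[i - k:i + k + 1].sum(axis=0) - (2 * k + 1) * scan_xyz[i]
        u[i] = float(diff @ diff)
    return u

def classify(scan_xyz: np.ndarray, thresh: float):
    """Points with u above the threshold become line features, the rest surface features."""
    u = smoothness(scan_xyz)
    valid = ~np.isnan(u)
    return np.where(valid & (u > thresh))[0], np.where(valid & (u <= thresh))[0]
```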
The following unstable noise points are eliminated:
Points at the edge of the lidar field of view are deformed and exhibit spuriously large curvature, which degrades the quality of feature extraction; therefore points at the edge of the field of view, namely points with a depth value larger than 60 m, are removed.
Points where the included angle between the laser beam and the surface is too small, i.e., where the beam is nearly parallel to the surface, are also removed.
Some occluded laser points are removed, because the surface on which an occluded point lies may disappear in the next frame while the lidar is moving, as is the case for point B in the figure; occluded points are judged by comparing the depth values of neighbouring points A and B.
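A sketch of these three rejection rules is shown below. Only the 60 m depth limit comes from the description; the incidence-angle bound and the relative depth-jump ratio used for the occlusion test are illustrative assumptions, since the text gives no numeric criteria for them.

```python
import numpy as np

def reject_unstable_points(depth, incidence_deg,
                           max_depth: float = 60.0, min_incidence_deg: float = 10.0,
                           occlusion_ratio: float = 0.1) -> np.ndarray:
    """Boolean mask of points kept along one scan line (points ordered by scan angle)."""
    depth = np.asarray(depth, dtype=float)
    incidence_deg = np.asarray(incidence_deg, dtype=float)
    keep = depth <= max_depth                            # 1) drop field-of-view edge points (> 60 m)
    keep &= incidence_deg >= min_incidence_deg           # 2) drop beams nearly parallel to the surface
    rel_jump = np.abs(np.diff(depth, prepend=depth[0]))  # 3) occlusion test: a large relative depth
    rel_jump /= np.maximum(depth, np.roll(depth, 1))     #    jump between neighbouring points A and B
    keep &= rel_jump <= occlusion_ratio
    return keep
```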
A second pose is obtained through inertial navigation laser radar fusion; the detailed process of calculating the point cloud feature point error and the second reprojection error according to the second pose comprises the following steps:
selecting keyframes, with the sliding window containing a fixed number of sub-keyframes; the sub-keyframes are transformed into the world coordinate system and finally merged into a voxel map;
the voxel map contains the edge features of frames i, i-1, ..., i-n expressed in the world coordinate system;
and the surface (planar) features of frames i, i-1, ..., i-n expressed in the world coordinate system;
matching the edge features and the planar features, and establishing a pose-solving equation from the point-to-line and point-to-plane distance relations:
wherein the edge terms denote the edge coordinates of point k at time i+1 and the edge coordinates of points u and v at time i;
and the planar terms denote the surface coordinates of points u, v and w at time i and the surface coordinates of point k at time i+1;
the non-linear problem in scan matching is represented as an iteratively solved linear problem:
wherein the unknown is the transformation matrix of the keyframe in the world coordinate system, and A and b are obtained from the linearized pose, with A representing the rotation and b the translation.
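The point-to-line and point-to-plane distances that build this equation are commonly written as follows. This is an assumed standard LOAM-style form, with T the sought pose, p a feature point of scan i+1 and p_u, p_v, p_w its matched points of scan i; it is given for illustration rather than as the patent's exact equations.

```latex
% Point-to-line distance for an edge point p matched to the line through p_u and p_v:
d_e = \frac{\left\| (T p - p_u) \times (T p - p_v) \right\|}{\left\| p_u - p_v \right\|}
% Point-to-plane distance for a planar point p matched to the plane through p_u, p_v and p_w:
d_s = \frac{\left| (T p - p_u) \cdot \left[ (p_u - p_v) \times (p_u - p_w) \right] \right|}
           {\left\| (p_u - p_v) \times (p_u - p_w) \right\|}
% Linearizing and stacking these residuals over all matches yields the linear system A\,T = b
% solved iteratively for the keyframe transform T in the world frame.
```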
In step S230, based on the pixel position of the visual feature point in the image pixel coordinate system and the three-dimensional coordinates of the point cloud feature point in the world coordinate, calculating a third pose of the laser radar and the camera by using a PNP method, and calculating a camera position feature point error and a third reprojection error according to the third pose;
the PNP algorithm is a calculation method for point-to-point movement from 3D points to 2D points, and the method estimates the pose of a camera through n 3D points and the projection positions of the n 3D points in an image coordinate system. The calculation method of the 3D-2D point pair does not need to require the pair level constraint of the camera in the calculation process, and meanwhile, better camera motion estimation can be still realized under the condition that the number of correct matching of the feature points is small.
In three-dimensional space there are n spatial points P with known coordinates, i = 1, 2, 3, 4, ..., n, whose projections onto the pixel coordinate system have known pixel coordinates; the relation between a camera spatial point and its pixel position can be written as:
wherein the terms denote: the scale information of the image pixel point; the matrix K, which is the intrinsic parameter matrix of the camera; and the rotation and translation matrices of the camera expressed on the Lie group;
an error-term equation is established by the least-squares method to estimate the pose of the camera:
formula (23) expresses the value of the pose at which the sum of error terms takes its minimum, and its optimization variable is the camera transformation matrix.
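In the usual notation the projection relation and the least-squares pose estimate read as follows; this is a sketch assuming the standard pinhole PNP formulation, with s_i the scale of pixel i, K the intrinsic matrix and T the camera pose on the Lie group.

```latex
% Projection of spatial point P_i to pixel u_i:
s_i\, u_i = K\, T\, P_i, \qquad i = 1, \dots, n
% Least-squares camera pose estimate (the minimization referred to as formula (23) above,
% written in an assumed standard form):
T^{*} = \arg\min_{T} \; \frac{1}{2} \sum_{i=1}^{n} \left\| u_i - \frac{1}{s_i} K\, T\, P_i \right\|_2^2
```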
The method also comprises adjusting the pose of the camera through the constructed reprojection-minimization equation so that the reprojection error of the camera position reaches its minimum value;
the coordinate p of the feature point is transformed into a point in the camera coordinate system, and the transformed coordinates are:
wherein the parameters are the focal length of the camera along the x axis, the focal length of the camera along the y axis, the x-axis optical center of the camera, and the y-axis optical center of the camera.
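Written out with the intrinsic parameters just listed, the pixel coordinates of a feature point P' = (X', Y', Z') expressed in the camera coordinate system take the familiar pinhole form (assumed standard notation):

```latex
u = f_x \frac{X'}{Z'} + c_x, \qquad v = f_y \frac{Y'}{Z'} + c_y
% f_x, f_y: focal lengths along the x and y axes; c_x, c_y: optical centre coordinates.
```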
In step S230, the first reprojection error, the second reprojection error and the third reprojection error are compared, weight ranking is performed on the visual feature point error, the point cloud feature point error and the camera position feature point error respectively, and quality grades are divided after the environmental features are matched.
Through the above steps, the feature points extracted by the camera and the laser radar, and the pose results solved from those feature points, are obtained. The feature point errors and reprojection errors are calculated from these results, the errors obtained by each fusion method are compared, and a voting mechanism is used to rank the weight of each feature point. Corresponding thresholds are set according to the application scene, the feature points are classified according to the voting result, and the scene spatio-temporal association information provided by vision and by the radar is divided by information matching quality grade into three information sets: a quality primary set, a quality secondary set and a quality tertiary set. The quality secondary set contains the quality primary set, and the quality tertiary set contains both the quality primary set and the quality secondary set. Using the voting strategy, fusion positioning is performed with association information sets of the different quality grades. Several different information fusion modes are maintained dynamically to ensure that an effective fusion mode always exists, and joint reliability evaluation and screening are performed to select the output of the effective mode. The top-ranked feature points are selected to take the main part in solving the pose at the next moment, and a decision is made according to the number of feature points available at the next moment as to whether relatively weak feature points should also participate in the next solution.
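The selection logic of this step can be sketched in Python as follows. All function names, the error weights, the grade thresholds and the minimum feature count are hypothetical; in practice the thresholds would be set for the application scene as described above.

```python
def grade_and_select(errors: dict, thresholds=(0.5, 1.0, 2.0), min_count: int = 30):
    """Rank feature points by fused error and split them into nested quality sets.

    errors     : {feature_id: weighted error combining the visual, point cloud and
                  camera position feature point errors (weights from the voting step)}
    thresholds : error bounds of the primary / secondary / tertiary sets (illustrative values)
    min_count  : smallest number of features allowed to take part in the next solution
    """
    ranked = sorted(errors, key=errors.get)                  # smaller error -> higher rank
    primary   = [f for f in ranked if errors[f] <= thresholds[0]]
    secondary = [f for f in ranked if errors[f] <= thresholds[1]]   # contains the primary set
    tertiary  = [f for f in ranked if errors[f] <= thresholds[2]]   # contains both sets above

    # Use the smallest set that still provides enough features; fall back to weaker
    # feature points only when the count would otherwise be insufficient.
    for tier in (primary, secondary, tertiary):
        if len(tier) >= min_count:
            return tier, (primary, secondary, tertiary)
    return ranked[:min_count], (primary, secondary, tertiary)
```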
The robustness control method for multi-source fusion positioning provided in embodiment 1 of the invention can control the number and quality of the feature points that participate in the calculation of the next frame and ensure that the minimum number of feature points participates in the solution while the unmanned vehicle system reaches its highest achievable accuracy. Compared with the previous scheme in which all feature points participate in the solution, it improves the robustness of the system, improves the environmental adaptability and accuracy of the unmanned vehicle system, reduces the participation of erroneous points or points with large errors in the solution, and facilitates the engineering application of multi-source sensor data fusion.
Example 2
Based on the robustness control method for multi-source fusion positioning provided in embodiment 1 of the invention, embodiment 2 of the invention further provides a robustness control system for multi-source fusion positioning. Fig. 5 is a schematic diagram of the robustness control system for multi-source fusion positioning according to the embodiment of the invention. The system comprises a first extraction and fusion module, a second extraction and fusion module, a calculation module and a division module;
the first extraction and fusion module is used for extracting visual feature points from the same frame of image data of the same scene to obtain the pixel positions of the visual feature points in the image pixel coordinate system, obtaining a first pose through visual inertial navigation fusion, and calculating a visual feature point error and a first reprojection error according to the first pose;
the second extraction and fusion module is used for performing inertial navigation and laser radar feature point extraction on the ground point cloud data of the same frame of the same scene to obtain the three-dimensional coordinates of the point cloud feature points in world coordinates, and obtaining a second pose through inertial navigation laser radar fusion; and calculating a point cloud feature point error and a second reprojection error according to the second pose;
the calculation module is used for calculating a third pose of the laser radar and the camera by adopting a PNP method based on the pixel position of the visual feature point in an image pixel coordinate system and the three-dimensional coordinates of the point cloud feature point in world coordinates, and calculating a camera position feature point error and a third reprojection error according to the third pose;
the dividing module is used for comparing the first reprojection error, the second reprojection error and the third reprojection error, respectively performing weight ranking on the visual feature point error, the point cloud feature point error and the camera position feature point error, and performing quality grade division after the environmental features are matched.
The system further comprises an initialization module used for initializing the integrated navigation system; after initialization is completed, the first extraction and fusion module, the second extraction and fusion module, the calculation module and the division module are executed correspondingly.
The process implemented by each module in embodiment 2 of the present invention is the same as the process in the corresponding step in embodiment 1, and is not described in detail here.
The robustness control system for multi-source fusion positioning provided in embodiment 2 of the invention modularizes the corresponding steps of the method of embodiment 1. It can control the number and quality of the feature points participating in the solution of the next frame and ensure that the smallest number of feature points participate in the solution while the unmanned vehicle system reaches the highest accuracy. Compared with the previous scheme in which all feature points participate in the solution, the system not only improves the calculation speed, environmental adaptability and accuracy of the unmanned vehicle system, but also improves its robustness, reduces the participation of erroneous points or points with larger errors in the solution, and facilitates the engineering application of multi-source sensor data fusion.
It should be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Furthermore, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. In addition, those parts of the technical solutions provided in the embodiments of the present application whose implementation principles are consistent with corresponding technical solutions in the prior art are not described in detail, so as to avoid redundant description.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, the scope of the present invention is not limited thereto. Various modifications and alterations will occur to those skilled in the art based on the foregoing description; the embodiments described are neither required nor exhaustive of all possible embodiments. On the basis of the technical scheme of the invention, various modifications or changes that can be made by a person skilled in the art without creative effort still fall within the protection scope of the invention.
Claims (10)
1. A robustness control method for multi-source fusion positioning is characterized by comprising the following steps:
extracting visual feature points from the same frame of image data of the same scene to obtain pixel positions of the visual feature points in an image pixel coordinate system, obtaining a first pose through visual-inertial navigation fusion, and calculating a visual feature point error and a first reprojection error according to the first pose;
performing inertial navigation and laser radar feature point extraction on ground point cloud data of the same frame of the same scene to obtain three-dimensional coordinates of the point cloud feature points in world coordinates, and obtaining a second pose through inertial navigation-laser radar fusion; calculating a point cloud feature point error and a second reprojection error according to the second pose;
calculating a third pose of the laser radar and the camera by adopting a PNP method based on the pixel positions of the visual feature points in the image pixel coordinate system and the three-dimensional coordinates of the point cloud feature points in world coordinates, and calculating a camera position feature point error and a third reprojection error according to the third pose;
and comparing the first reprojection error, the second reprojection error and the third reprojection error, respectively performing weight ranking on the visual feature point error, the point cloud feature point error and the camera position feature point error, and performing quality grade division after the environmental features are matched.
2. The robustness control method for multi-source fusion positioning as recited in claim 1, further comprising, before feature point extraction: initializing an integrated navigation system;
the process of initializing the integrated navigation system comprises the following steps: the angular velocity of the navigation coordinate system relative to the earth coordinate system caused by the carrier velocity is not considered during attitude updating, and, on the premise that the corresponding term caused by the carrier velocity is likewise not considered during velocity updating, the attitude update formula during open-loop initial alignment is as follows:
the formula for the velocity update during the open-loop initial alignment is:
wherein the quantities denote, in order: the attitude matrix of the carrier coordinate system in the navigation coordinate system; the angular velocity of the carrier coordinate system relative to the Earth-centered inertial coordinate system; the rotation angular velocity of the navigation system caused by the Earth's rotation; the projection of the specific force in the navigation coordinate system; and the acceleration of gravity;
the error equation of the integrated navigation system is as follows:
wherein the quantities denote, in order: the attitude matrix error; the angular velocity of the navigation system relative to the inertial system; the null-shift error of the gyroscope; the velocity error; the projection of the specific force in the navigation coordinate system; the specific force measured by the accelerometer; and the constant bias of the accelerometer.
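The update and error formulas of this claim were reproduced as figures in the original publication and are not included in the text above. A common strapdown-inertial form consistent with the listed definitions, written in assumed notation ($C_b^n$ attitude matrix, $\omega_{ib}^b$ and $\omega_{ie}^n$ angular velocities, $f$ specific force, $g^n$ gravity, $\phi$ attitude error, $\delta v^n$ velocity error, $\varepsilon$ gyro drift, $\nabla$ accelerometer bias), is sketched below; signs and omitted coupling terms follow the stated simplifications and may differ from the original figures.

```latex
% Attitude and velocity updates for open-loop initial alignment, with the
% navigation-frame rotation caused by the carrier velocity neglected:
\dot{C}_b^n = C_b^n\,[\omega_{ib}^b\times] - [\omega_{ie}^n\times]\,C_b^n ,
\qquad
\dot{v}^n = C_b^n f^b - 2\,\omega_{ie}^n \times v^n + g^n
% Simplified error equations of the integrated navigation system
% (attitude error and velocity error):
\dot{\phi} = \phi \times \omega_{in}^n - C_b^n\,\varepsilon^b ,
\qquad
\delta\dot{v}^n = f^n \times \phi + C_b^n\,\nabla^b
```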
3. The robustness control method for multi-source fusion positioning as claimed in claim 1, wherein the image data feature extraction process comprises:
extracting, by means of an improved FAST corner detector, pixel points with obvious differences in the image data, wherein the gray value difference between such a pixel point and its surrounding neighboring pixels is larger than a set threshold;
and improving the rotation invariance of the feature points by adopting a gray centroid algorithm, and determining the positions of the ORB feature points in the image data by characterizing the pixel points with a BRIEF descriptor.
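A minimal sketch of this extraction step using OpenCV's stock ORB implementation (FAST corners, orientation from the gray/intensity centroid, and BRIEF descriptors). The improved FAST corner of the claim is not reproduced, and the file name and parameter values are placeholders.

```python
import cv2

# Load one frame of image data (placeholder path) in grayscale.
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Stock ORB: FAST keypoints whose gray value differs from their neighbours by
# more than fastThreshold, oriented by the intensity (gray) centroid, and
# described with rotation-aware BRIEF descriptors.
orb = cv2.ORB_create(nfeatures=1000, fastThreshold=20)
keypoints, descriptors = orb.detectAndCompute(img, None)

# Pixel positions of the visual feature points in the image pixel coordinate
# system, as consumed by the later fusion steps.
pixel_positions = [kp.pt for kp in keypoints]
```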
4. The robustness control method for multi-source fusion positioning as claimed in claim 1, wherein the detailed process of the visual-inertial navigation fusion comprises:
all state vectors within the sliding window are defined:
wherein the state quantities denote, in order: the rotation matrix between the i-th frame of inertial navigation and the world coordinate system; the translational deviation; the velocity vector of inertial navigation in the world coordinate system; the accelerometer bias; the rotational deviation; n, m and o respectively denote the indexes of the IMU state vectors, landmark points and landmark lines in the sliding window; N, M and O respectively denote the number of key frames in the sliding window and the numbers of landmark points and landmark lines observed by all key frames in the sliding window; the inverse depth at which the k-th landmark point is first observed; and the orthogonal expression of the i-th feature line;
the sliding window based nonlinear optimization is done by minimizing the residuals of all state quantities within the sliding window, where the objective function is expressed as:
wherein the terms of the objective function denote, in order: the prior information from frames removed from the sliding window because they are too old or share too few co-visible features; the residual; the Jacobian matrix; all state vectors within the sliding window; and the Cauchy robust function used as a penalty term in the iterative constraint process;
the inertial navigation pre-integration term between the state quantities of the i-th frame and the (i+1)-th frame, comprising the error between inertial navigation frames and the inertial navigation frame observations;
the reprojection error of the visual feature points, together with the observed value on the normalized plane used when this reprojection error is calculated;
and the reprojection error of the visual feature lines, together with the observed value on the normalized plane used when this reprojection error is calculated;
to avoid oscillation caused by growth of the iteration step, starting from the initial value, a small increment is added and the states are updated iteratively until the requirements are met:
wherein the quantities denote, in order: the Jacobian matrix of the error vector with respect to the states; the prior errors before and after the iteration; the inertial navigation pre-integration errors before and after the iteration; the visual feature point reprojection errors before and after the iteration; and the visual feature line reprojection errors before and after the iteration.
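The objective function and the iterative update of this claim were reproduced as figures in the original publication. A standard tightly coupled sliding-window form with the same ingredients (marginalization prior, IMU pre-integration residuals, point and line reprojection residuals under a Cauchy robust kernel $\rho$, and a damped incremental update), written in assumed notation, is:

```latex
\min_{\mathcal{X}}\;\Big\{
\big\| r_p - H_p\,\mathcal{X} \big\|^2
+ \sum_{i\in\mathcal{B}} \big\| r_{\mathcal{B}}\big(\hat{z}^{b_i}_{b_{i+1}},\,\mathcal{X}\big) \big\|^2
+ \sum_{(k,j)\in\mathcal{P}} \rho\Big( \big\| r_{\mathcal{P}}\big(\hat{z}^{c_j}_{k},\,\mathcal{X}\big) \big\|^2 \Big)
+ \sum_{(l,j)\in\mathcal{L}} \rho\Big( \big\| r_{\mathcal{L}}\big(\hat{z}^{c_j}_{l},\,\mathcal{X}\big) \big\|^2 \Big)
\Big\}
% Damped Gauss-Newton / Levenberg-Marquardt style update by a small
% increment, iterated until the stopping requirement is met:
\big(J^{\top}J + \lambda I\big)\,\delta x = -\,J^{\top} e ,
\qquad
x \leftarrow x \boxplus \delta x
```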
5. The robustness control method for multi-source fusion positioning according to claim 1, wherein the inertial navigation and lidar characteristic point extraction process comprises:
down-sampling the ground point cloud of each frame by using a voxel grid filtering method, and marking the down-sampled ground point cloud as ground feature points;
after dynamic obstacles are removed, defining smoothness in order to extract line feature points and surface feature points from the remaining non-ground point cloud;
setting a smoothness threshold, taking points whose smoothness is larger than the smoothness threshold as line feature points, and taking points whose smoothness is smaller than the threshold as surface feature points.
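A minimal numpy sketch of the smoothness-based split into line and surface feature points (a LOAM-style curvature over neighbours along a scan line). The neighbourhood size and the two thresholds are illustrative assumptions, and the voxel-grid down-sampling of the ground points is omitted.

```python
import numpy as np

def split_line_surface_points(scan_xyz, k=5, line_thresh=1.0, surf_thresh=0.1):
    """Classify the points of one non-ground scan line by smoothness.

    scan_xyz: (N, 3) array of points ordered along the scan line.
    Points whose smoothness exceeds line_thresh become line (edge) feature
    points; points below surf_thresh become surface feature points.
    """
    n = scan_xyz.shape[0]
    smoothness = np.zeros(n)
    valid = np.arange(k, n - k)
    for i in valid:
        # LOAM-style curvature: sum of differences to k neighbours on each side.
        diff = (2 * k) * scan_xyz[i] \
               - scan_xyz[i - k:i].sum(axis=0) \
               - scan_xyz[i + 1:i + k + 1].sum(axis=0)
        smoothness[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan_xyz[i]))

    line_idx = valid[smoothness[valid] > line_thresh]
    surf_idx = valid[smoothness[valid] < surf_thresh]
    return line_idx, surf_idx, smoothness
```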
6. The robustness control method for multi-source fusion positioning as claimed in claim 5, wherein the inertial navigation-laser radar fusion process comprises:
selecting a key frame and, by means of a sliding window, including a fixed number of sub-key frames; transforming the sub-key frames into the world coordinate system and finally merging them into a voxel map;
the voxel map comprises the edge features of the i-th, (i-1)-th, …, (i-n)-th frames expressed in the world coordinate system;
and the surface features of the i-th, (i-1)-th, …, (i-n)-th frames expressed in the world coordinate system;
matching the edge features and the planar features, and establishing a pose-solving equation by using the point-to-line and point-to-plane distance relations:
wherein the quantities denote, in order: the edge-point coordinates of point k at time i+1; the edge-point coordinates of point u at time i; and the edge-point coordinates of point v at time i;
the planar-point coordinates of point u at time i; the planar-point coordinates of point v at time i; the planar-point coordinates of point k at time i+1; and the planar-point coordinates of point w at time i;
the non-linear problem in scan matching is represented as an iteratively solved linear problem:
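The distance relations and the linearized scan-matching problem of this claim were reproduced as figures in the original publication. Standard LOAM-style point-to-line and point-to-plane residuals consistent with the listed definitions, in assumed notation ($p^{i+1}_k$ an edge or planar point at time $i+1$; $p^{i}_u$, $p^{i}_v$, $p^{i}_w$ its matched points at time $i$), are:

```latex
d_{\mathcal{E}} =
\frac{\big\| (p^{i+1}_{k} - p^{i}_{u}) \times (p^{i+1}_{k} - p^{i}_{v}) \big\|}
     {\big\| p^{i}_{u} - p^{i}_{v} \big\|} ,
\qquad
d_{\mathcal{H}} =
\frac{\big| (p^{i+1}_{k} - p^{i}_{u}) \cdot \big[ (p^{i}_{u} - p^{i}_{v}) \times (p^{i}_{u} - p^{i}_{w}) \big] \big|}
     {\big\| (p^{i}_{u} - p^{i}_{v}) \times (p^{i}_{u} - p^{i}_{w}) \big\|}
% Stacking these residuals and linearizing about the current pose estimate
% turns the nonlinear scan-matching problem into a sequence of linear
% least-squares problems solved iteratively (Gauss-Newton / LM).
```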
7. The robustness control method for multi-source fusion positioning according to claim 6, wherein the process of extracting the feature points of the camera and the lidar comprises:
in three-dimensional space there are n spatial points P, whose coordinates are given for i = 1, 2, 3, 4, …, n together with the coordinates of their projections onto the pixel coordinate system; the relationship between the camera spatial points and the pixel positions can be obtained:
wherein the quantities denote, in order: the scale information of the image pixel points; the K matrix, i.e. the intrinsic parameters of the camera; and the rotation and translation matrices of the camera expressed as a Lie group;
and establishing an error-term equation by the least squares method to estimate the pose of the camera:
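A minimal sketch of the PNP step with OpenCV, assuming hypothetical inputs pts3d (point cloud feature points in world coordinates, shape N×3), pts2d (matched visual feature point pixel positions, shape N×2) and an intrinsic matrix K; the least-squares refinement inside solvePnPRansac stands in for the error-term equation of the claim, and the returned per-point residuals correspond to the third reprojection error.

```python
import cv2
import numpy as np

def solve_third_pose(pts3d, pts2d, K, dist=None):
    """Estimate the lidar-camera (third) pose by PnP and return the
    per-point reprojection error (illustrative sketch only)."""
    dist = np.zeros(5) if dist is None else dist
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, dist)
    if not ok:
        raise RuntimeError("PnP failed")

    # Reproject the 3-D points with the estimated pose and compare them with
    # the observed pixel positions.
    proj, _ = cv2.projectPoints(pts3d.astype(np.float64), rvec, tvec, K, dist)
    per_point_err = np.linalg.norm(proj.reshape(-1, 2) - pts2d, axis=1)
    return rvec, tvec, inliers, per_point_err
```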
8. The robustness control method for multi-source fusion positioning according to claim 7, further comprising constructing a minimized reprojection equation to adjust the pose of the camera so that the reprojection error of the camera position reaches a minimum value;
the coordinate p of the feature point is transformed into a point in the camera coordinate system, and the transformed coordinates are:
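The transformation and minimization formulas of claims 7 and 8 were reproduced as figures in the original publication. A common form consistent with the listed definitions, in assumed notation ($P_i$ spatial point, $u_i$ its pixel coordinates, $s_i$ scale, $K$ intrinsics, $T$ the camera pose as a Lie-group element), is:

```latex
P'_i = T\,P_i = \big[\,X'_i,\; Y'_i,\; Z'_i\,\big]^{\top} ,
\qquad
s_i\,u_i = K\,T\,P_i
% Least-squares pose estimate minimizing the reprojection error:
T^{*} = \arg\min_{T}\;\frac{1}{2}\sum_{i=1}^{n}
\Big\| u_i - \frac{1}{s_i}\,K\,T\,P_i \Big\|_2^{2}
```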
9. The robustness control method for multi-source fusion positioning according to claim 1, further comprising, after performing quality grade division, controlling the number and quality of the feature points participating in the solution of the next frame, and ensuring that the solution uses the smallest number of feature points so that the system achieves the highest precision.
10. A robustness control system for multi-source fusion positioning is characterized by comprising a first extraction and fusion module, a second extraction and fusion module, a calculation module and a division module;
the first extraction and fusion module is used for extracting visual feature points from the same frame of image data of the same scene to obtain the pixel positions of the visual feature points in the image pixel coordinate system, obtaining a first pose through visual-inertial navigation fusion, and calculating a visual feature point error and a first reprojection error according to the first pose;
the second extraction and fusion module is used for performing inertial navigation and laser radar feature point extraction on the ground point cloud data of the same frame of the same scene to obtain the three-dimensional coordinates of the point cloud feature points in world coordinates, and obtaining a second pose through inertial navigation-laser radar fusion; and calculating a point cloud feature point error and a second reprojection error according to the second pose;
the calculation module is used for calculating a third pose of the laser radar and the camera by adopting a PNP method based on the pixel position of the visual feature point in an image pixel coordinate system and the three-dimensional coordinate of the point cloud feature point in world coordinates, and calculating a camera position feature point error and a third reprojection error according to the third pose;
the dividing module is used for comparing the first reprojection error, the second reprojection error and the third reprojection error, respectively performing weight ranking on the visual feature point error, the point cloud feature point error and the camera position feature point error, and performing quality grade division after the environmental features are matched.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210562354.XA CN114648584B (en) | 2022-05-23 | 2022-05-23 | Robustness control method and system for multi-source fusion positioning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114648584A true CN114648584A (en) | 2022-06-21 |
CN114648584B CN114648584B (en) | 2022-08-30 |
Family
ID=81997550
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210562354.XA Active CN114648584B (en) | 2022-05-23 | 2022-05-23 | Robustness control method and system for multi-source fusion positioning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114648584B (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109465832A (en) * | 2018-12-18 | 2019-03-15 | 哈尔滨工业大学(深圳) | High-precision vision and the tight fusion and positioning method of IMU and system |
CN110261870A (en) * | 2019-04-15 | 2019-09-20 | 浙江工业大学 | It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method |
CN110428467A (en) * | 2019-07-30 | 2019-11-08 | 四川大学 | A kind of camera, imu and the united robot localization method of laser radar |
CN110706279A (en) * | 2019-09-27 | 2020-01-17 | 清华大学 | Global position and pose estimation method based on information fusion of global map and multiple sensors |
CN111207774A (en) * | 2020-01-17 | 2020-05-29 | 山东大学 | Method and system for laser-IMU external reference calibration |
CN111595333A (en) * | 2020-04-26 | 2020-08-28 | 武汉理工大学 | Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion |
CN111983639A (en) * | 2020-08-25 | 2020-11-24 | 浙江光珀智能科技有限公司 | Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU |
CN112347840A (en) * | 2020-08-25 | 2021-02-09 | 天津大学 | Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method |
CN112598757A (en) * | 2021-03-03 | 2021-04-02 | 之江实验室 | Multi-sensor time-space calibration method and device |
CN113436260A (en) * | 2021-06-24 | 2021-09-24 | 华中科技大学 | Mobile robot pose estimation method and system based on multi-sensor tight coupling |
CN113721260A (en) * | 2021-08-26 | 2021-11-30 | 南京邮电大学 | Online combined calibration method for laser radar, binocular camera and inertial navigation |
Non-Patent Citations (1)
Title |
---|
THIEN-MINH NGUYEN ET AL.: "VIRAL SLAM: Tightly Coupled Camera-IMU-UWB-Lidar SLAM", 《ARXIV:2105.03296V3》 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115442584A (en) * | 2022-08-30 | 2022-12-06 | 中国传媒大学 | Multi-sensor fusion irregular surface dynamic projection method |
CN115442584B (en) * | 2022-08-30 | 2023-08-18 | 中国传媒大学 | Multi-sensor fusion type special-shaped surface dynamic projection method |
CN117911520A (en) * | 2022-10-12 | 2024-04-19 | 北京三快在线科技有限公司 | Camera internal parameter calibration method and automatic driving equipment |
CN115774265A (en) * | 2023-02-15 | 2023-03-10 | 江苏集萃清联智控科技有限公司 | Two-dimensional code and laser radar fusion positioning method and device for industrial robot |
CN118797214A (en) * | 2024-09-10 | 2024-10-18 | 电子科技大学 | Unmanned system flexible collaboration method for intelligent seal detection scene |
Also Published As
Publication number | Publication date |
---|---|
CN114648584B (en) | 2022-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114648584B (en) | Robustness control method and system for multi-source fusion positioning | |
CN109307508B (en) | Panoramic inertial navigation SLAM method based on multiple key frames | |
CN109166149B (en) | Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU | |
CN111583369B (en) | Laser SLAM method based on facial line angular point feature extraction | |
CN110068335B (en) | Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment | |
CN112304307A (en) | Positioning method and device based on multi-sensor fusion and storage medium | |
CN110223348A (en) | Robot scene adaptive bit orientation estimation method based on RGB-D camera | |
CN111024066A (en) | Unmanned aerial vehicle vision-inertia fusion indoor positioning method | |
CN110726406A (en) | Improved nonlinear optimization monocular inertial navigation SLAM method | |
CN113432600A (en) | Robot instant positioning and map construction method and system based on multiple information sources | |
CN106595659A (en) | Map merging method of unmanned aerial vehicle visual SLAM under city complex environment | |
CN114419147A (en) | Rescue robot intelligent remote human-computer interaction control method and system | |
WO2024109002A1 (en) | Bionic-vision multi-source-information unmanned intelligent sensing platform | |
CN116619358A (en) | Self-adaptive positioning optimization and mapping method for autonomous mining robot | |
CN115272596A (en) | Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene | |
CN111998862A (en) | Dense binocular SLAM method based on BNN | |
CN113674412A (en) | Pose fusion optimization-based indoor map construction method and system and storage medium | |
CN112731503B (en) | Pose estimation method and system based on front end tight coupling | |
CN116989763B (en) | Fusion positioning and mapping method for amphibious unmanned system | |
CN116681733B (en) | Near-distance real-time pose tracking method for space non-cooperative target | |
CN112945233A (en) | Global drift-free autonomous robot simultaneous positioning and map building method | |
CN117152228A (en) | Self-supervision image depth estimation method based on channel self-attention mechanism | |
WO2023030062A1 (en) | Flight control method and apparatus for unmanned aerial vehicle, and device, medium and program | |
CN116045965A (en) | Multi-sensor-integrated environment map construction method | |
CN116147618A (en) | Real-time state sensing method and system suitable for dynamic environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||