
CN114648584A - Robustness control method and system for multi-source fusion positioning - Google Patents

Robustness control method and system for multi-source fusion positioning Download PDF

Info

Publication number
CN114648584A
CN114648584A (application CN202210562354.XA); granted publication CN114648584B
Authority
CN
China
Prior art keywords
error
point
representing
points
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210562354.XA
Other languages
Chinese (zh)
Other versions
CN114648584B (en)
Inventor
刘晓慧
杨东清
张宝庭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced Technology Research Institute of Beijing Institute of Technology
Original Assignee
Advanced Technology Research Institute of Beijing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Technology Research Institute of Beijing Institute of Technology
Priority to CN202210562354.XA
Publication of CN114648584A
Application granted
Publication of CN114648584B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C 21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C 21/1652 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C 21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C 21/1656 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C 21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30 Map- or contour-matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)
  • Navigation (AREA)

Abstract

The invention provides a robustness control method and system for multi-source fusion positioning, wherein the method comprises the following steps: extracting visual feature points from the same frame of image data of the same scene, obtaining a first pose through visual-inertial navigation fusion, and calculating a visual feature point error and a first reprojection error according to the first pose; performing inertial navigation and laser radar feature point extraction on point cloud data of the same frame in the same scene, and obtaining a second pose through inertial navigation laser radar fusion; calculating a point cloud feature point error and a second reprojection error according to the second pose; calculating a third pose of the laser radar and the camera, and calculating a camera position feature point error and a third reprojection error according to the third pose; and comparing the reprojection errors, performing weight ranking on the respective feature point errors, and carrying out quality grade division. Based on the method, a robustness control system is also provided. The method can increase the solution speed of the system, improve the environmental adaptability and accuracy of the system, and improve the robustness of the system.

Description

Robustness control method and system for multi-source fusion positioning
Technical Field
The invention belongs to the technical field of multi-source fusion positioning of unmanned vehicles, and particularly relates to a robustness control method and system for multi-source fusion positioning.
Background
Multi-source fusion positioning has become a mainstream technology for unmanned vehicle positioning and navigation, effectively overcoming the drawbacks of a single sensor on an unmanned vehicle, such as error accumulation and poor stability. Based on information fusion technology, multi-source fusion positioning fuses isomorphic or heterogeneous navigation information from different navigation sources according to a corresponding fusion algorithm, so as to obtain an optimal fusion result. Compared with a traditional single navigation source, multi-source fusion navigation can make full use of the advantages of each navigation source and thereby provide the best unmanned vehicle positioning and navigation service. Therefore, research on the robustness of multi-source fusion positioning systems is necessary and contributes to the long-term development of multi-source fusion positioning technology.
Prior art robustness studies on multi-source fusion positioning have disclosed a number of approaches. A first scheme provides a multi-source sensor information fusion navigation system based on inertial navigation, an odometer and ultra-wide band (UWB): a factor graph model is used to establish the fusion framework, the information of each sensor is abstracted into factors, and an incremental smoothing and mapping algorithm based on a Bayes tree structure (iSAM2) is adopted to process and fuse the sensor information. A second scheme provides an indoor positioning method that fuses IMU (inertial measurement unit) and UWB information using unscented Kalman filtering (UKF), which effectively suppresses the error accumulation of the IMU and improves the solution accuracy of the nonlinear observation equation. A third approach uses extended Kalman filtering to fuse UWB, IMU and vision sensors, which allows a low-cost inertial measurement unit (IMU) to be used in the prediction step and integrates visual ranging into marker detection, reaching a precision of 10 cm.
In summary, the common methods for multi-sensor information fusion in the prior art can basically be grouped into four categories: probability estimation algorithms, recognition algorithms, parameter algorithms and artificial intelligence algorithms. Each category comprises several typical methods. The essential idea of multi-sensor information fusion is a weighted average, in which the data acquired by different sensors are given different weights according to their accuracy. Many classical filtering algorithms originate from Bayesian filtering; particle filtering, Kalman filtering, hidden Markov models and dynamic Bayesian networks are practical representatives of such filtering algorithms. The core of the Bayesian approach is to calculate the corresponding prior probability distribution and fused feature values from the known sensor inputs, where the feature values describe the overall environment characteristics provided by a prior model. However, current navigation systems suffer from problems such as asynchronous sensor sampling times. Navigation systems in the prior art generally integrate multiple sensors; different sensors are affected by different working environments, and sensor information is also subject to delays during transmission. In the prior art, extrapolation, interpolation, curve fitting and similar methods are mostly adopted in the information fusion process, but these methods cannot guarantee high synchronization accuracy. Different working environments and tasks also impose requirements on sensor selection, and the combination of sensors is related to the operating environment and the task, so problems such as insufficient information fusion and under-exploited sensor characteristics remain. Therefore, the traditional data fusion methods lack a certain adaptive capacity, and it is difficult for the unmanned vehicle to achieve satisfactory performance in a complex environment.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a robustness control method and system for multi-source fusion positioning. The method can increase the solution speed of the unmanned vehicle system, improve its environmental adaptability, accuracy and robustness, and reduce the participation of erroneous points, or points with larger errors, in the solution.
In order to achieve the purpose, the invention adopts the following technical scheme:
a robustness control method for multi-source fusion positioning comprises the following steps:
extracting visual feature points from the same frame of image data of the same scene to obtain the pixel positions of the visual feature points in the image pixel coordinate system, obtaining a first pose through visual-inertial navigation fusion, and calculating a visual feature point error and a first reprojection error according to the first pose;
performing inertial navigation and laser radar feature point extraction on ground point cloud data of the same frame of the same scene to obtain three-dimensional coordinates of point cloud feature points under world coordinates, and obtaining a second pose through inertial navigation laser radar fusion; calculating a point cloud feature point error and a second reprojection error according to the second pose;
calculating a third pose of the laser radar and the camera by adopting a PNP method based on the pixel position of the visual feature point in an image pixel coordinate system and the three-dimensional coordinates of the point cloud feature point in world coordinates, and calculating a camera position feature point error and a third reprojection error according to the third pose;
and comparing the first reprojection error, the second reprojection error and the third reprojection error, respectively carrying out weight ranking on the visual feature point error, the point cloud feature point error and the camera position feature point error, and carrying out quality grade division after the environmental features are matched.
Further, before the feature point extraction, the method further comprises: initializing the integrated navigation system;
the process of initializing the integrated navigation system comprises the following steps: the angular velocity of the navigation coordinate system relative to the earth coordinate system caused by the carrier velocity is neglected during the attitude update, and the rotation of the navigation coordinate system caused by the carrier velocity is neglected during the velocity update; on this premise, the attitude update formula during open-loop initial alignment is:

\dot{C}_b^n = C_b^n (\omega_{ib}^b \times) - (\omega_{ie}^n \times) C_b^n    (6)

the formula for the velocity update during the open-loop initial alignment is:

\dot{v}^n = f^n + g^n    (7)

wherein: C_b^n represents the attitude matrix of the carrier coordinate system in the navigation coordinate system; \omega_{ib}^b represents the angular velocity of the carrier coordinate system relative to the Earth-centered inertial coordinate system; \omega_{ie}^n represents the rotation rate of the navigation frame caused by the rotation of the Earth; f^n represents the projection of the specific force in the navigation coordinate system; g^n represents the acceleration of gravity;
the error equations of the integrated navigation system are as follows:

\dot{\phi} = \phi \times \omega_{in}^n + \varepsilon    (8)

\delta\dot{v}^n = f^n \times \phi + \nabla    (9)

wherein: \phi represents the error of the attitude matrix, \omega_{in}^n represents the angular velocity of the navigation frame relative to the inertial frame, \varepsilon represents the null-shift (bias) error of the gyroscope, \delta v^n represents the velocity error, f^n represents the projection of the specific force in the navigation coordinate system as obtained from the specific force \tilde{f}^b measured by the accelerometer, and \nabla represents the constant bias of the accelerometer.
Further, the image data feature extraction process includes:
extracting pixel points with obvious differences in the image data by adopting improved FAST corners, wherein a pixel point with obvious difference is a pixel whose gray value differs from that of its surrounding adjacent pixels by more than a set threshold;
and improving the rotation invariance of the feature points by adopting a gray centroid algorithm, and determining the positions of the ORB feature points in the image data by judging the characteristics of the pixel points through a BRIEF descriptor.
Further, the detailed process of the visual inertial navigation fusion comprises:
all state vectors within the sliding window are defined:
\mathcal{X} = [ x_0, x_1, \ldots, x_N, \lambda_0, \ldots, \lambda_M, \mathcal{O}_0, \ldots, \mathcal{O}_O ], \quad x_i = [ R^w_{b_i}, p^w_{b_i}, v^w_{b_i}, b_a, b_g ]    (14)

wherein, R^w_{b_i} represents the rotation matrix between the i-th frame of inertial navigation and the world coordinate system; p^w_{b_i} represents the translation deviation; v^w_{b_i} represents the velocity vector of inertial navigation in the world coordinate system; b_a represents the accelerometer bias; b_g represents the rotational deviation; n, m and o respectively represent the indexes of the IMU state vectors, the landmark points and the landmark lines in the sliding window; N, M and O respectively represent the number of key frames in the sliding window and the numbers of landmark points and landmark lines observed by all key frames in the sliding window; \lambda_k represents the inverse depth at which the k-th landmark point is first observed, and \mathcal{O}_l is the orthogonal expression of the l-th feature line;
the sliding-window-based nonlinear optimization is done by minimizing the residuals of all state quantities within the sliding window, where the objective function is expressed as:

\min_{\mathcal{X}} \Big\{ \| r_p - J_p \mathcal{X} \|^2 + \sum_i \rho\big( \| r_B( \hat{z}^{b_i}_{b_{i+1}}, \mathcal{X} ) \|^2 \big) + \sum_{(i,k)} \rho\big( \| r_F( \hat{z}^{c_i}_{f_k}, \mathcal{X} ) \|^2 \big) + \sum_{(i,l)} \rho\big( \| r_L( \hat{z}^{c_i}_{l_l}, \mathcal{X} ) \|^2 \big) \Big\}    (15)

wherein, r_p and J_p are the prior information (residual and Jacobian matrix) of the frames removed from the sliding window because they are too old or share too few co-visible features; \mathcal{X} is all state vectors within the sliding window; \rho is the Cauchy robust function used as a penalty term in the iterative constraint process; r_B is the residual of the inertial navigation pre-integration between the state quantities of the i-th frame and the (i+1)-th frame, with \hat{z}^{b_i}_{b_{i+1}} the corresponding inter-frame observation; r_F represents the reprojection error of the visual feature points, with \hat{z}^{c_i}_{f_k} the observation on the normalized plane used when the reprojection error of the visual feature points is calculated; r_L is the reprojection error of the visual feature lines, with \hat{z}^{c_i}_{l_l} the observation on the normalized plane used when the reprojection error of the visual feature lines is calculated;
to avoid oscillation caused by the growth of the iteration step, starting from the initial value the state is iteratively updated by a small increment \delta\mathcal{X} until the requirement of equation (16) is met, namely that the prior error, the inertial navigation pre-integration error, the visual feature point reprojection error and the visual feature line reprojection error after the iteration are no larger than those before the iteration, where J is the Jacobian matrix of the error vector e with respect to \delta\mathcal{X}.
Further, the inertial navigation and lidar characteristic point extraction process comprises the following steps:
carrying out down-sampling on the ground point cloud of each frame by using a voxel grid filtering method, and marking the down-sampled ground point cloud as a ground characteristic point;
after the dynamic obstacles are removed, extracting line feature points and surface feature points from the remaining non-ground point cloud, and defining their smoothness;
setting a smoothness threshold value, and taking points with smoothness larger than the smoothness threshold value as line characteristic points; points whose smoothness is less than the threshold are regarded as surface feature points.
Further, the inertial navigation lidar fusion process comprises the following steps:
the laser radar frame composed of all the extracted features is defined as:

\mathbb{F}_i = \{ F^e_i, F^p_i \}    (17)

wherein, F^e_i is the edge feature extracted at the i-th moment and F^p_i is the plane feature extracted at the i-th moment;
key frames are selected, and a fixed number of sub-key frames are contained through the sliding window; the sub-key frames are transformed into the world coordinate system and finally merged into a voxel map M_i:

M_i = \{ M^e_i, M^p_i \}, \quad M^e_i = {}^W F^e_i \cup {}^W F^e_{i-1} \cup \cdots \cup {}^W F^e_{i-n}, \quad M^p_i = {}^W F^p_i \cup {}^W F^p_{i-1} \cup \cdots \cup {}^W F^p_{i-n}    (18)

wherein, M^e_i is the edge feature voxel map; M^p_i is the planar feature voxel map; {}^W F^e_i, {}^W F^e_{i-1} and {}^W F^e_{i-n} are the edge features of the i-th, (i-1)-th and (i-n)-th frames expressed in the world coordinate system; {}^W F^p_i, {}^W F^p_{i-1} and {}^W F^p_{i-n} are the plane (surface) features of the i-th, (i-1)-th and (i-n)-th frames expressed in the world coordinate system;
the edge features and the plane features are matched, and the pose solving equations are established by using the point-to-line and point-to-plane distance relations:

d_e = | ( p^e_{i+1,k} - p^e_{i,u} ) \times ( p^e_{i+1,k} - p^e_{i,v} ) | / | p^e_{i,u} - p^e_{i,v} |    (19)

d_p = | ( p^p_{i+1,k} - p^p_{i,u} ) \cdot ( ( p^p_{i,u} - p^p_{i,v} ) \times ( p^p_{i,u} - p^p_{i,w} ) ) | / | ( p^p_{i,u} - p^p_{i,v} ) \times ( p^p_{i,u} - p^p_{i,w} ) |    (20)

wherein, p^e_{i+1,k} represents the edge coordinates of point k at the (i+1)-th moment; p^e_{i,u} and p^e_{i,v} represent the edge coordinates of points u and v at the i-th moment; p^p_{i+1,k} represents the plane-point coordinates of point k at the (i+1)-th moment; p^p_{i,u}, p^p_{i,v} and p^p_{i,w} represent the plane-point coordinates of points u, v and w at the i-th moment;
the nonlinear problem in scan matching is expressed as an iteratively solved linear problem of the form given in equation (21), wherein T is the transformation matrix of the key frame in the world coordinate system, and A and b are obtained from the linearized pose T, with A representing the rotation part and b representing the translation part.
Further, the process of extracting the feature points of the camera and the lidar includes:
in three-dimensional space there are n spatial points P, whose coordinates are P_i = (X_i, Y_i, Z_i), i = 1, 2, 3, 4, …, n, and whose coordinates projected onto the pixel coordinate system are u_i = (u_i, v_i); the relationship between the camera spatial points and the pixel positions can be found as:

s_i u_i = K T P_i    (22)

wherein, s_i represents the scale information of the image pixel point, the K matrix represents the internal parameters of the camera, and T represents the rotation and translation matrix of the camera expressed as a Lie group;
an error term equation is established by the least squares method to estimate the pose of the camera:

T^* = \arg\min_T \frac{1}{2} \sum_{i=1}^{n} \| u_i - \frac{1}{s_i} K T P_i \|_2^2    (23)

formula (23) represents the pose T^* obtained when the error term takes its minimum value, where the optimization variable is the matrix T;
further, the method further comprises adjusting the pose of the camera through the constructed minimized reprojection equation so that the reprojection error of the camera position reaches the minimum value;
the coordinate p of the feature point is transformed into a point P' in the camera coordinate system, and the transformed coordinates are:

P' = (T p)_{1:3} = (X', Y', Z')    (24)

the camera projection model with respect to P', when expanded, gives:

u' = f_x X'/Z' + c_x, \quad v' = f_y Y'/Z' + c_y    (25)

wherein, f_x is the focal length of the camera with reference to the x axis, f_y is the focal length of the camera with reference to the y axis, c_x is the x-axis optical center of the camera, and c_y is the y-axis optical center of the camera.
Furthermore, the method also comprises, after the quality grade division, controlling the quantity and quality of the feature points participating in the solution of the next frame, so that the minimum number of feature points participates in the solution while the system reaches the highest accuracy it can achieve.
The invention also provides a robustness control system for multi-source fusion positioning, which comprises a first extraction and fusion module, a second extraction and fusion module, a calculation module and a division module;
the first extraction and fusion module is used for extracting visual feature points from the same frame of image data of the same scene to obtain the pixel positions of the visual feature points in the image pixel coordinate system, obtaining a first pose through visual-inertial navigation fusion, and calculating a visual feature point error and a first reprojection error according to the first pose;
the second extraction and fusion module is used for performing inertial navigation and laser radar feature point extraction on the ground point cloud data of the same frame in the same scene to obtain the three-dimensional coordinates of the point cloud feature points in world coordinates, and obtaining a second pose through inertial navigation laser radar fusion; and calculating a point cloud feature point error and a second reprojection error according to the second pose;
the calculation module is used for calculating a third pose of the laser radar and the camera by adopting a PNP method based on the pixel positions of the visual feature points in the image pixel coordinate system and the three-dimensional coordinates of the point cloud feature points in world coordinates, and calculating a camera position feature point error and a third reprojection error according to the third pose;
the dividing module is used for comparing the first reprojection error, the second reprojection error and the third reprojection error, respectively carrying out weight ranking on the visual feature point error, the point cloud feature point error and the camera position feature point error, and carrying out quality grade division after the environmental features are matched.
The effect provided in the summary of the invention is only the effect of the embodiment, not all the effects of the invention, and one of the above technical solutions has the following advantages or beneficial effects:
the invention provides a robustness control method and a system for multi-source fusion positioning, wherein the method comprises the following steps: extracting visual characteristic points aiming at the same frame of image data of the same scene to obtain pixel positions of the visual characteristic points under an image pixel coordinate system, obtaining a first position through visual inertial navigation fusion, and calculating a visual characteristic point error and a first reprojection error according to the first position; performing inertial navigation and laser radar feature point extraction on ground point cloud data of the same frame of the same scene to obtain three-dimensional coordinates of point cloud feature points under world coordinates, and obtaining a second pose through inertial navigation laser radar fusion; calculating a point cloud characteristic point error and a second reprojection error according to the second pose; calculating a third pose of the laser radar and the camera by adopting a PNP method based on the pixel position of the visual feature point in an image pixel coordinate system and the three-dimensional coordinates of the point cloud feature point in world coordinates, and calculating a camera position feature point error and a third reprojection error according to the third pose; and comparing the first re-projection error, the second re-projection error and the third re-projection error, respectively performing weight ranking on the visual characteristic point error, the point cloud characteristic point error and the phase position characteristic point error, and performing quality grade division after the environmental characteristics are matched. Based on a robustness control method for multi-source fusion positioning, a robustness control system for multi-source fusion positioning is also provided. The invention can control the quantity and quality of the feature points participating in the solution of the next frame, ensure that the solution is participated in with the minimum feature point quantity, can enable the unmanned vehicle system to achieve the highest accuracy which can be achieved, and compared with the scheme that all the feature points participate in the solution, the invention not only can improve the solution speed of the unmanned vehicle system, improve the environmental adaptability and the accuracy of the system, but also improve the robustness of the unmanned vehicle system, reduce the participation of points with errors or larger errors in the solution, and is convenient for the engineering application of multisource sensor data fusion.
Drawings
Fig. 1 is a frame diagram of an implementation of a robustness control method for multi-source fusion positioning according to embodiment 1 of the present invention;
fig. 2 is a detailed flowchart of implementation of a robustness control method for multi-source fusion positioning according to embodiment 1 of the present invention;
fig. 3 is a schematic diagram of a corner extraction pyramid in embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of a screening point according to example 1 of the present invention;
fig. 5 is a schematic diagram of a robustness control system for multi-source fusion positioning according to embodiment 2 of the present invention.
Detailed Description
In order to clearly explain the technical features of the present invention, the following detailed description of the present invention is provided with reference to the accompanying drawings. The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. To simplify the disclosure of the present invention, specific example components and arrangements are described below. Moreover, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. It should be noted that the components illustrated in the figures are not necessarily drawn to scale. Descriptions of well-known components and processing techniques and procedures are omitted so as to not unnecessarily limit the invention.
Example 1
The embodiment 1 of the invention provides a robustness control method for multi-source fusion positioning, which is based on the data fusion of multi-source sensors, namely an inertial navigation unit (IMU), a camera and a laser radar, all of which are mounted on the unmanned vehicle system. Fig. 1 is a frame diagram of an implementation of a robustness control method for multi-source fusion positioning according to embodiment 1 of the present invention; fig. 2 is a detailed flowchart of a robustness control method for multi-source fusion positioning according to embodiment 1 of the present invention.
In step S200, the integrated navigation system is initialized.
The process of initializing the integrated navigation system comprises the following steps: the angular velocity of the navigation coordinate system relative to the earth coordinate system caused by the carrier velocity is neglected during the attitude update, and the rotation of the navigation coordinate system caused by the carrier velocity is neglected during the velocity update; on this premise, the attitude update formula during open-loop initial alignment is:

\dot{C}_b^n = C_b^n (\omega_{ib}^b \times) - (\omega_{ie}^n \times) C_b^n    (6)

the formula for the velocity update during the open-loop initial alignment is:

\dot{v}^n = f^n + g^n    (7)

wherein: C_b^n represents the attitude matrix of the carrier coordinate system in the navigation coordinate system; \omega_{ib}^b represents the angular velocity of the carrier coordinate system relative to the Earth-centered inertial coordinate system; \omega_{ie}^n represents the rotation rate of the navigation frame caused by the rotation of the Earth; f^n represents the projection of the specific force in the navigation coordinate system; g^n represents the acceleration of gravity;
the error equations of the integrated navigation system are as follows:

\dot{\phi} = \phi \times \omega_{in}^n + \varepsilon    (8)

\delta\dot{v}^n = f^n \times \phi + \nabla    (9)

wherein: \phi represents the error of the attitude matrix, \omega_{in}^n represents the angular velocity of the navigation frame relative to the inertial frame, \varepsilon represents the null-shift (bias) error of the gyroscope, \delta v^n represents the velocity error, f^n represents the projection of the specific force in the navigation coordinate system as obtained from the specific force \tilde{f}^b measured by the accelerometer, and \nabla represents the constant bias of the accelerometer.
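The following is a minimal illustrative sketch (not the patent's implementation; the Euler integration, the re-orthonormalisation step and the function names are assumptions) of how the simplified updates of equations (6) and (7) can be propagated:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def alignment_step(C_bn, v_n, w_ib_b, w_ie_n, f_b, g_n, dt):
    # Attitude update, equation (6): the transport rate due to carrier velocity is ignored.
    C_dot = C_bn @ skew(w_ib_b) - skew(w_ie_n) @ C_bn
    C_bn = C_bn + C_dot * dt
    # Re-orthonormalise so C_bn stays a rotation matrix after Euler integration.
    u, _, vt = np.linalg.svd(C_bn)
    C_bn = u @ vt
    # Velocity update, equation (7): specific force mapped to the navigation frame plus gravity.
    f_n = C_bn @ f_b
    v_n = v_n + (f_n + g_n) * dt
    return C_bn, v_n
```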
In step S210, visual feature points are extracted from the same frame of image data of the same scene to obtain the pixel positions of the visual feature points in the image pixel coordinate system, a first pose is obtained through visual-inertial navigation fusion, and a visual feature point error and a first reprojection error are calculated according to the first pose.
The process of extracting visual feature points from the same frame of image data of the same scene to obtain the pixel positions of the visual feature points in the image pixel coordinate system comprises the following steps: extracting pixel points with obvious differences in the image data by adopting improved FAST corners, wherein a pixel point with obvious difference is a pixel whose gray value differs from that of its surrounding adjacent pixels by more than a set threshold;
and improving the rotation invariance of the feature points by adopting a gray centroid algorithm, and determining the positions of the ORB feature points in the image data by judging the characteristics of the pixel points through a BRIEF descriptor.
In this step, the condition for determining a FAST corner is the degree of change of pixel gray levels in the image: if the difference between the gray value of a pixel and the gray values of its neighboring pixels is greater than a set threshold, the point can be determined to be a FAST corner. The extraction of FAST corners requires a pixel distance relation to be defined; the feature extraction in this method uses the city-block (Manhattan) distance to describe the positional relationship of pixel points p and q, as follows:

d(p, q) = |x_p - x_q| + |y_p - y_q|    (10)

wherein, d(p, q) represents the distance between the pixel points p and q.
Gray values are extracted for the pixel point p and the surrounding pixel points at a city-block distance of 3. If the gray values of N consecutive such pixels differ from the gray value of point p by more than 25%, the point p is determined to be a FAST corner; the threshold is set according to the indoor illumination environment.
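The corner test described above can be sketched as follows; the angular ordering of the ring, the helper names and the exact handling of the 25% threshold are assumptions made for illustration rather than the patent's implementation:

```python
import math
import numpy as np

def ring_offsets(radius=3):
    """Offsets (dx, dy) whose city-block (Manhattan) distance from p equals radius."""
    pts = [(dx, dy) for dx in range(-radius, radius + 1)
           for dy in range(-radius, radius + 1)
           if abs(dx) + abs(dy) == radius]
    # Order the ring pixels by angle so "consecutive" means adjacent on the ring.
    return sorted(pts, key=lambda p: math.atan2(p[1], p[0]))

def is_fast_corner(img, x, y, n_required=9, rel_threshold=0.25):
    center = float(img[y, x])
    ring = [float(img[y + dy, x + dx]) for dx, dy in ring_offsets()]
    # True where the ring pixel differs from the centre gray value by more than 25 %.
    diff = [abs(v - center) > rel_threshold * max(center, 1.0) for v in ring]
    # Look for n_required consecutive "different" pixels on the (wrapped) ring.
    wrapped = diff + diff[:n_required - 1]
    run = 0
    for d in wrapped:
        run = run + 1 if d else 0
        if run >= n_required:
            return True
    return False
```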
In the invention, in order to determine the direction information of the corner point p, the centroid angle information of the surrounding image is combined, and a gray centroid algorithm is introduced to ensure that the same corner can still be accurately identified when the image rotates. With the gray value information of the corner represented by I(x, y), the moments of the acquired image block B can be defined as:

m_{pq} = \sum_{x,y \in B} x^p y^q I(x, y), \quad p, q \in \{0, 1\}    (11)

the centroid of image block B can be represented by the moments of the image:

C = ( m_{10} / m_{00}, m_{01} / m_{00} )    (12)

the direction of the FAST corner can be obtained from the line connecting the geometric center of the image block and its centroid:

\theta = \arctan( m_{01} / m_{10} )    (13)
adding BRIEF descriptor information into the collected corner information, describing the pixel size relationship near the corner by using 0 and 1, randomly selecting surrounding pixel points p and q by using a probability distribution method, if the value of p is larger than that of q, taking 1, and otherwise, taking 0. The binary descriptors are used for representing the environment around the corner points, the selecting speed is high, the storage is convenient, the real-time matching can be applied, and meanwhile, the image pyramid algorithm is introduced to improve the characteristic matching influence of image scaling change. Fig. 3 is a schematic diagram of a corner extraction pyramid in embodiment 1 of the present invention; the first piece of collected data is an original image, and is sequentially subjected to a down-sampling process to obtain half of the size of the original image and is placed on the topmost layer. After 8 times of downsampling, when the same corner point is determined as a feature point in each layer, the corner point is determined as a final feature point to be extracted and subsequent matching work is carried out.
A first pose is obtained through visual-inertial navigation fusion, and the process of calculating the visual feature point error and the first reprojection error according to the first pose comprises the following steps:
the visual-inertial fusion system is divided into modules such as data preprocessing, estimator initialization and VIO, and mainly addresses problems such as visual positioning failure caused by illumination change, lack of texture and motion blur. Pure visual pose estimation is performed using a sliding-window method: feature matching between the latest frame and the previous frames is detected, and feature tracks with stable and sufficient parallax are matched within the sliding window. In order to reduce the amount of calculation, a part of the co-visible frames carrying key information is selected; these frames are called key frames, and the number of key frames contained in the window is then controlled through a strategy.
All state vectors within the sliding window are defined:
\mathcal{X} = [ x_0, x_1, \ldots, x_N, \lambda_0, \ldots, \lambda_M, \mathcal{O}_0, \ldots, \mathcal{O}_O ], \quad x_i = [ R^w_{b_i}, p^w_{b_i}, v^w_{b_i}, b_a, b_g ]    (14)

wherein, R^w_{b_i} represents the rotation matrix between the i-th frame of inertial navigation and the world coordinate system; p^w_{b_i} represents the translation deviation; v^w_{b_i} represents the velocity vector of inertial navigation in the world coordinate system; b_a represents the accelerometer bias; b_g represents the rotational deviation; n, m and o respectively represent the indexes of the IMU state vectors, the landmark points and the landmark lines in the sliding window; N, M and O respectively represent the number of key frames in the sliding window and the numbers of landmark points and landmark lines observed by all key frames in the sliding window; \lambda_k represents the inverse depth at which the k-th landmark point is first observed, and \mathcal{O}_l is the orthogonal expression of the l-th feature line;
the sliding-window-based nonlinear optimization is done by minimizing the residuals of all state quantities within the sliding window, where the objective function is expressed as:

\min_{\mathcal{X}} \Big\{ \| r_p - J_p \mathcal{X} \|^2 + \sum_i \rho\big( \| r_B( \hat{z}^{b_i}_{b_{i+1}}, \mathcal{X} ) \|^2 \big) + \sum_{(i,k)} \rho\big( \| r_F( \hat{z}^{c_i}_{f_k}, \mathcal{X} ) \|^2 \big) + \sum_{(i,l)} \rho\big( \| r_L( \hat{z}^{c_i}_{l_l}, \mathcal{X} ) \|^2 \big) \Big\}    (15)

wherein, r_p and J_p are the prior information (residual and Jacobian matrix) of the frames removed from the sliding window because they are too old or share too few co-visible features; \mathcal{X} is all state vectors within the sliding window; \rho is the Cauchy robust function used as a penalty term in the iterative constraint process; r_B is the residual of the inertial navigation pre-integration between the state quantities of the i-th frame and the (i+1)-th frame, with \hat{z}^{b_i}_{b_{i+1}} the corresponding inter-frame observation; r_F represents the reprojection error of the visual feature points, with \hat{z}^{c_i}_{f_k} the observation on the normalized plane used when the reprojection error of the visual feature points is calculated; r_L is the reprojection error of the visual feature lines, with \hat{z}^{c_i}_{l_l} the observation on the normalized plane used when the reprojection error of the visual feature lines is calculated;
to avoid oscillation caused by too large an iteration step, starting from the initial value the state is iteratively updated by a small increment \delta\mathcal{X} until the requirement of equation (16) is met, namely that the prior error, the inertial navigation pre-integration error, the visual feature point reprojection error and the visual feature line reprojection error after the iteration are no larger than those before the iteration, where J is the Jacobian matrix of the error vector e with respect to \delta\mathcal{X}.
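The sliding-window optimization loop can be sketched schematically as follows; the residual-block interface, the Cauchy weighting and the damping term are illustrative assumptions, not the estimator actually used:

```python
import numpy as np

def cauchy_weight(r2, c=1.0):
    """Weight implied by the Cauchy robust function rho(s) = c^2 * log(1 + s / c^2)."""
    return 1.0 / (1.0 + r2 / c**2)

def optimize_window(state, residual_blocks, num_iters=10, damping=1e-2):
    """residual_blocks: callables returning (residual_vector, jacobian) at `state`."""
    def total_cost(x):
        return sum(cauchy_weight(r @ r) * (r @ r)
                   for r, _ in (blk(x) for blk in residual_blocks))

    cost = total_cost(state)
    for _ in range(num_iters):
        # Accumulate the robust-weighted normal equations H dx = -g.
        H = np.zeros((state.size, state.size))
        g = np.zeros(state.size)
        for blk in residual_blocks:
            r, J = blk(state)
            w = cauchy_weight(r @ r)
            H += w * J.T @ J
            g += w * J.T @ r
        dx = np.linalg.solve(H + damping * np.eye(state.size), -g)  # damped small increment
        new_state = state + dx
        new_cost = total_cost(new_state)
        if new_cost >= cost:          # stop when the errors no longer decrease
            break
        state, cost = new_state, new_cost
    return state
```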
In step S220, inertial navigation and laser radar feature point extraction are performed on the ground point cloud data of the same frame of the same scene to obtain the three-dimensional coordinates of the point cloud feature points in world coordinates, and a second pose is obtained through inertial navigation laser radar fusion; a point cloud feature point error and a second reprojection error are calculated according to the second pose.
The detailed process of performing inertial navigation and laser radar feature point extraction on the ground point cloud data of the same frame of the same scene to obtain the three-dimensional coordinates of the point cloud feature points in world coordinates comprises: down-sampling the ground point cloud of each frame by using a voxel grid filtering method and marking the down-sampled ground point cloud as ground feature points;
after the dynamic barrier is removed, extracting line characteristic points and surface characteristic points of the rest non-ground point clouds to define smoothness;
setting a smoothness threshold value, and taking points with smoothness larger than the smoothness threshold value as line characteristic points; points whose smoothness is less than the threshold are regarded as surface feature points.
For the extraction of the ground feature points, a geometric constraint is added on the basis of the depth image, and the vertical angle θ and the elevation difference Δh jointly constrain the ground points. Since the lidar is mounted horizontally on the carrier, this ground extraction method has the advantage that the ground is not strictly required to be completely horizontal. The ground point cloud of each frame is down-sampled to different degrees by the voxel grid filtering method, and the down-sampled points are marked as two sets of ground feature points.
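A minimal sketch of the voxel-grid down-sampling used to mark the ground feature points is shown below; the leaf size and the centroid-per-voxel choice are assumptions:

```python
import numpy as np

def voxel_grid_filter(points, leaf_size=0.4):
    """points: (N, 3) array; returns one centroid per occupied voxel."""
    keys = np.floor(points / leaf_size).astype(np.int64)
    buckets = {}
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])
```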
After the dynamic obstacles are removed, line feature points and surface feature points are extracted from the remaining non-ground point cloud, for which a smoothness u is defined; the steps are as follows:
within the same scan, for any point A, the five points to its left and the five points to its right are selected;
the difference between the X coordinate of each of these points and the X coordinate of point A is calculated, and all the differences are summed to obtain S_X; the difference between the Y coordinate of each point and the Y coordinate of point A is calculated and summed to obtain S_Y; the difference between the Z coordinate of each point and the Z coordinate of point A is calculated and summed to obtain S_Z;
the smoothness u of point A is then calculated from S_X, S_Y and S_Z.
The five points at the edge of each scan do not participate in feature point selection, because the condition of calculating smoothness from the five points before and after them cannot be satisfied. In order to prevent the feature points in one frame of data from being excessively aggregated, each scan in each frame of point cloud is divided into six parts from the starting scan to the ending scan, and a smoothness threshold is set. Line feature point selection condition: points whose smoothness is larger than the threshold are regarded as line feature points; if a line feature point already exists among the five points before or after a candidate point, that point is skipped and a point with smaller smoothness is selected instead. Surface feature point selection condition: points whose smoothness is less than the threshold are regarded as surface feature points; if a surface feature point already exists among the five points before or after a candidate point, that point is skipped and the selection continues from points with higher smoothness. For each frame of point cloud, the points with larger u values are marked as line feature points and the points with smaller u values are marked as surface feature points; from the line feature points, a fixed number of points with the largest u values are further selected, subject to the line selection condition, and from the surface feature points, a fixed number of points with the smallest u values are further selected, subject to the surface selection condition. Three kinds of feature points, namely ground feature points, non-ground line feature points and non-ground surface feature points, are thus extracted for each frame of laser point cloud, and each feature point cloud is stored in two parts, i.e. the full marked set and the further-selected subset. FIG. 4 is a schematic diagram of the screening points in embodiment 1 of the present invention.
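The smoothness-based selection can be sketched as follows; because the exact smoothness expression is given only as an image in the original, u is taken here as the norm of (S_X, S_Y, S_Z), which is an assumption:

```python
import numpy as np

def smoothness(scan_xyz):
    """scan_xyz: (N, 3) array of points from one scan line, in acquisition order."""
    n = len(scan_xyz)
    u = np.full(n, np.nan)          # edge points (first/last five) are left undefined
    for i in range(5, n - 5):
        neighbours = np.vstack([scan_xyz[i - 5:i], scan_xyz[i + 1:i + 6]])
        s = (neighbours - scan_xyz[i]).sum(axis=0)   # (S_X, S_Y, S_Z)
        u[i] = np.linalg.norm(s)
    return u

def select_features(scan_xyz, threshold):
    u = smoothness(scan_xyz)
    line_idx = [i for i in range(len(u)) if u[i] > threshold]
    surface_idx = [i for i in range(len(u)) if u[i] <= threshold and not np.isnan(u[i])]
    return line_idx, surface_idx
```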
The following unstable noise points are eliminated:
points at the edge of the laser radar's field of view are deformed and show a large-curvature phenomenon that degrades the quality of feature extraction, so the points at the edge of the field of view, namely points with a depth value larger than 60 m, are removed;
points for which the angle between the laser beam and the surface is too small, i.e. cases in which the laser beam is nearly parallel to the surface, are removed;
part of the occluded laser points are removed, since the surface on which an occluded point lies may disappear in the next frame while the lidar is moving, as in the case of point B in the figure. With the depth values of points A and B defined as d_A and d_B, the occluded point is judged by comparing d_A with d_B.
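The three noise filters can be sketched as below; apart from the 60 m depth limit quoted above, the thresholds and the occlusion test are assumptions:

```python
import numpy as np

def keep_point(p, prev_p, max_depth=60.0, min_incidence_deg=10.0, occlusion_gap=0.3):
    """p, prev_p: 3-D points of the current and previous beams in the sensor frame."""
    d = np.linalg.norm(p)
    d_prev = np.linalg.norm(prev_p)
    if d > max_depth:                       # field-of-view edge: depth > 60 m
        return False
    # Angle between the beam direction and the local surface direction
    # (approximated by the vector to the previous point in the scan).
    surf = p - prev_p
    cos_a = abs(np.dot(p, surf)) / (d * np.linalg.norm(surf) + 1e-9)
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    if angle < min_incidence_deg:
        return False                        # beam nearly parallel to the surface
    if d - d_prev > occlusion_gap * d_prev: # much deeper than its neighbour
        return False                        # treat as possibly occluded (like point B)
    return True
```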
A second pose is obtained through inertial navigation laser radar fusion; the detailed process of calculating the point cloud feature point error and the second reprojection error according to the second pose comprises the following steps:
the laser radar frame composed of all the extracted features is:

\mathbb{F}_i = \{ F^e_i, F^p_i \}    (17)

wherein, F^e_i is the edge feature extracted at the i-th moment and F^p_i is the plane feature extracted at the i-th moment;
key frames are selected, and a fixed number of sub-key frames are contained through the sliding window; the sub-key frames are transformed into the world coordinate system and finally merged into a voxel map M_i:

M_i = \{ M^e_i, M^p_i \}, \quad M^e_i = {}^W F^e_i \cup {}^W F^e_{i-1} \cup \cdots \cup {}^W F^e_{i-n}, \quad M^p_i = {}^W F^p_i \cup {}^W F^p_{i-1} \cup \cdots \cup {}^W F^p_{i-n}    (18)

wherein, M^e_i is the edge feature voxel map; M^p_i is the planar feature voxel map; {}^W F^e_i, {}^W F^e_{i-1} and {}^W F^e_{i-n} are the edge features of the i-th, (i-1)-th and (i-n)-th frames expressed in the world coordinate system; {}^W F^p_i, {}^W F^p_{i-1} and {}^W F^p_{i-n} are the plane (surface) features of the i-th, (i-1)-th and (i-n)-th frames expressed in the world coordinate system;
the edge features and the plane features are matched, and the pose solving equations are established by using the point-to-line and point-to-plane distance relations:

d_e = | ( p^e_{i+1,k} - p^e_{i,u} ) \times ( p^e_{i+1,k} - p^e_{i,v} ) | / | p^e_{i,u} - p^e_{i,v} |    (19)

d_p = | ( p^p_{i+1,k} - p^p_{i,u} ) \cdot ( ( p^p_{i,u} - p^p_{i,v} ) \times ( p^p_{i,u} - p^p_{i,w} ) ) | / | ( p^p_{i,u} - p^p_{i,v} ) \times ( p^p_{i,u} - p^p_{i,w} ) |    (20)

wherein, p^e_{i+1,k} represents the edge coordinates of point k at the (i+1)-th moment; p^e_{i,u} and p^e_{i,v} represent the edge coordinates of points u and v at the i-th moment; p^p_{i+1,k} represents the plane-point coordinates of point k at the (i+1)-th moment; p^p_{i,u}, p^p_{i,v} and p^p_{i,w} represent the plane-point coordinates of points u, v and w at the i-th moment;
the nonlinear problem in scan matching is expressed as an iteratively solved linear problem of the form given in equation (21), wherein T is the transformation matrix of the key frame in the world coordinate system, and A and b are obtained from the linearized pose T, with A representing the rotation part and b representing the translation part.
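A sketch of how the distances of equations (19) and (20) can feed an iteratively solved linear system is given below; the 6-vector pose parameterisation and the least-squares solve are assumptions standing in for the linearisation described around equation (21):

```python
import numpy as np

def point_to_line_distance(pk, pu, pv):
    """Equation (19): distance from pk to the line through pu and pv."""
    return np.linalg.norm(np.cross(pk - pu, pk - pv)) / np.linalg.norm(pu - pv)

def point_to_plane_distance(pk, pu, pv, pw):
    """Equation (20): distance from pk to the plane through pu, pv and pw."""
    n = np.cross(pu - pv, pu - pw)
    return abs(np.dot(pk - pu, n)) / np.linalg.norm(n)

def solve_pose_correction(residuals, jacobians):
    """residuals: list of scalar distances d_i; jacobians: list of (6,) gradients
    of d_i with respect to the pose increment [rotation, translation]."""
    A = np.vstack(jacobians)          # stacked linearised constraints
    b = -np.asarray(residuals)        # drive every distance towards zero
    dx, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx                         # 6-vector: 3 rotation + 3 translation
```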
In step S230, based on the pixel positions of the visual feature points in the image pixel coordinate system and the three-dimensional coordinates of the point cloud feature points in the world coordinate system, a third pose of the lidar and the camera is calculated by the PnP method, and a camera position feature point error and a third reprojection error are calculated according to the third pose.
The PnP algorithm is a method for estimating motion from 3D-2D point correspondences: it estimates the camera pose from n 3D points and their projection positions in the image coordinate system. This 3D-2D formulation does not require the epipolar constraint between camera views during the calculation, and it can still achieve good camera motion estimation when the number of correctly matched feature points is small.
In three-dimensional space there are n spatial points P with coordinates $P_i = [X_i, Y_i, Z_i]^{T}$, $i = 1, 2, 3, 4, \ldots, n$, whose projections onto the pixel coordinate system are $p_{uv,i} = [u_i, v_i]^{T}$. The relationship between the camera spatial points and the pixel positions is:

$$s_i\,p_{uv,i} = K\,T\,P_i \qquad (22)$$

where $s_i$ represents the scale information of the image pixel point, the matrix $K$ is the camera intrinsic parameter matrix, and $T$ is the camera rotation and translation matrix expressed as a Lie group element.
An error-term equation is established by the least-squares method to estimate the camera pose:

$$T^{*} = \arg\min_{T}\ \frac{1}{2}\sum_{i=1}^{n}\left\| p_{uv,i} - \frac{1}{s_i}\,K\,T\,P_i \right\|_{2}^{2} \qquad (23)$$

Formula (23) represents the pose $T^{*}$ at which the reprojection error takes its minimum value, with the transformation matrix $T$ as the optimization variable. The constructed minimized-reprojection equation adjusts the camera pose so that the reprojection error of the camera position reaches its minimum value.
The feature point coordinate $P$ is transformed into a point $P'$ under the camera coordinate system; the transformed coordinates are:

$$P' = \left(T\,P\right)_{1:3} = \left[X',\ Y',\ Z'\right]^{T} \qquad (24)$$

The camera projection model with respect to $P'$, when expanded, gives:

$$u = f_x\,\frac{X'}{Z'} + c_x,\qquad v = f_y\,\frac{Y'}{Z'} + c_y \qquad (25)$$
where $f_x$ is the focal length of the camera intrinsics along the x-axis, $f_y$ is the focal length of the camera intrinsics along the y-axis, $c_x$ is the optical center of the camera along the x-axis, and $c_y$ is the optical center of the camera along the y-axis.
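A minimal sketch of the reprojection-error evaluation described by equations (22)-(25) is shown below; the function names, the 4x4 pose convention and any numeric parameters are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def reproject(P_w, T_cw, fx, fy, cx, cy):
    """Project a 3-D world point into pixel coordinates (eqs. 24-25)."""
    P_c = T_cw[:3, :3] @ P_w + T_cw[:3, 3]      # P' = (T P)_{1:3}
    X, Y, Z = P_c
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return np.array([u, v])

def reprojection_error(points_3d, points_2d, T_cw, fx, fy, cx, cy):
    """Half the summed squared pixel error of eq. (23) for a candidate pose."""
    err = 0.0
    for P_w, p_uv in zip(points_3d, points_2d):
        err += 0.5 * np.sum((p_uv - reproject(P_w, T_cw, fx, fy, cx, cy)) ** 2)
    return err
```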
In step S230, the first reprojection error, the second reprojection error and the third reprojection error are compared, the visual feature point error, the point cloud feature point error and the camera position feature point error are weight-ranked, and quality grades are divided after the environmental features are matched.
Through the above steps, the feature points extracted by the camera and the lidar and the pose results computed from those feature points are obtained. The feature point errors and reprojection errors are calculated from these results, the errors produced by each fusion method are compared, and a voting mechanism is used to weight-rank the feature points. Corresponding thresholds are set according to the application scene, the feature points are classified according to the voting results, and the scene spatio-temporal association information provided by vision and radar is divided by information matching quality into three information sets: a quality primary set, a quality secondary set and a quality tertiary set. The quality secondary set contains the quality primary set, and the quality tertiary set contains both the quality primary set and the quality secondary set. Using the voting-mechanism strategy, fusion positioning is performed with the association information sets of different quality grades. Multiple different information fusion modes are maintained dynamically to ensure that an effective fusion mode always exists, and reliability joint evaluation and screening are performed to complete the output selection of the effective mode. The top-ranked feature points are selected to participate primarily in solving the pose at the next moment, and a decision is made according to the number of feature points at the next moment as to whether relatively weak feature points should also participate in the next solution.
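The tiering step can be pictured with the short sketch below, which weight-ranks feature points by a fused error score and assigns them to the three nested quality sets; the thresholds and the example scores are illustrative assumptions, since the patent only states that thresholds are set according to the application scene.

```python
def grade_feature_points(errors, t_primary=0.5, t_secondary=1.0, t_tertiary=2.0):
    """Split feature points into nested quality sets by their fused error score.

    `errors` maps a feature-point id to a combined (weighted) error value.
    The three thresholds are hypothetical; the patent only says they are
    chosen per application scene.
    """
    ranking = sorted(errors, key=errors.get)          # weight ranking: smallest error first
    primary = {p for p in ranking if errors[p] <= t_primary}
    secondary = primary | {p for p in ranking if errors[p] <= t_secondary}
    tertiary = secondary | {p for p in ranking if errors[p] <= t_tertiary}
    return ranking, {"primary": primary, "secondary": secondary, "tertiary": tertiary}

# Example: lower-error points land in the primary set and are preferred
# when solving the pose at the next moment.
ranking, tiers = grade_feature_points({"pt1": 0.2, "pt2": 0.8, "pt3": 1.7, "pt4": 3.0})
```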
The robustness control method for multi-source fusion positioning provided by embodiment 1 of the invention can control the number and quality of the feature points that participate in the solution of the next frame and ensures that the solution is carried out with the smallest necessary number of feature points, allowing the unmanned vehicle system to reach the highest accuracy. Compared with the previous scheme in which all feature points participate in the solution, the robustness of the system is improved, the environmental adaptability and accuracy of the unmanned vehicle system are improved, the participation of erroneous points or points with large errors in the solution is reduced, and the engineering application of multi-source sensor data fusion is facilitated.
Embodiment 2
Based on the robustness control method for multi-source fusion positioning provided by embodiment 1 of the invention, embodiment 2 of the invention further provides a robustness control system for multi-source fusion positioning. Fig. 5 is a schematic diagram of the robustness control system for multi-source fusion positioning in the embodiment of the invention; the system comprises a first extraction and fusion module, a second extraction and fusion module, a calculation module and a division module.
The first extraction and fusion module is used for extracting visual feature points aiming at the same frame of image data of the same scene to obtain pixel positions of the visual feature points under an image pixel coordinate system, obtaining a first pose through visual inertial navigation fusion, and calculating a visual feature point error and a first re-projection error according to the first pose;
the second extraction and fusion module is used for extracting the point cloud feature points of the ground point cloud data of the same frame in the same scene by inertial navigation and laser radar to obtain three-dimensional coordinates of the point cloud feature points in world coordinates, and obtaining a second pose through inertial navigation and laser radar fusion; and calculating a point cloud feature point error and a second reprojection error according to the second pose;
the calculation module is used for calculating a third pose of the laser radar and the camera by adopting a PNP method based on the pixel position of the visual feature point in an image pixel coordinate system and the three-dimensional coordinates of the point cloud feature point in world coordinates, and calculating a camera position feature point error and a third reprojection error according to the third pose;
the dividing module is used for comparing the first re-projection error, the second re-projection error and the third re-projection error, respectively carrying out weight ranking on the visual feature point error, the point cloud feature point error and the camera position feature point error, and, after the environmental features are matched, carrying out quality grade division.
The system further comprises an initialization module for initializing the integrated navigation system; after initialization is completed, the corresponding first extraction and fusion module, second extraction and fusion module, calculation module and division module are executed.
The process implemented by each module in embodiment 2 of the present invention is the same as the process in the corresponding step in embodiment 1, and is not described in detail here.
The robustness control system for multi-source fusion positioning provided by embodiment 2 of the invention modularizes the steps of the method of embodiment 1. It can control the number and quality of the feature points that participate in the solution of the next frame and ensures that the solution is carried out with the smallest necessary number of feature points, allowing the unmanned vehicle system to reach the highest accuracy. Compared with the previous scheme in which all feature points participate in the solution, it not only increases the solution speed of the unmanned vehicle system and improves the environmental adaptability and accuracy of the system, but also improves the robustness of the system, reduces the participation of erroneous points or points with large errors in the solution, and facilitates the engineering application of multi-source sensor data fusion.
It should be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Furthermore, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. In addition, the parts of the technical solutions provided in the embodiments of the present application that are consistent with the implementation principles of the corresponding technical solutions in the prior art are not described in detail, so as to avoid redundant description.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, the scope of the present invention is not limited thereto. Various modifications and alterations will occur to those skilled in the art based on the foregoing description; it is neither necessary nor possible to exhaust all embodiments here. On the basis of the technical scheme of the invention, various modifications or changes that can be made by a person skilled in the art without creative effort are still within the protection scope of the invention.

Claims (10)

1. A robustness control method for multi-source fusion positioning is characterized by comprising the following steps:
extracting visual characteristic points aiming at the same frame of image data of the same scene to obtain pixel positions of the visual characteristic points under an image pixel coordinate system, obtaining a first pose through visual inertial navigation fusion, and calculating a visual characteristic point error and a first reprojection error according to the first pose;
performing inertial navigation and laser radar feature point extraction on ground point cloud data of the same frame of the same scene to obtain three-dimensional coordinates of point cloud feature points under world coordinates, and obtaining a second pose through inertial navigation laser radar fusion; calculating a point cloud characteristic point error and a second reprojection error according to the second pose;
calculating a third pose of the laser radar and the camera by adopting a PNP method based on the pixel position of the visual feature point in an image pixel coordinate system and the three-dimensional coordinates of the point cloud feature point in world coordinates, and calculating a camera position feature point error and a third reprojection error according to the third pose;
and comparing the first reprojection error, the second reprojection error and the third reprojection error, respectively carrying out weight ranking on the visual characteristic point error, the point cloud characteristic point error and the camera position characteristic point error, and, after the environmental characteristics are matched, carrying out quality grade division.
2. The robustness control method for multi-source fusion localization as recited in claim 1, further comprising, before feature point extraction: initializing an integrated navigation system;
the process of initializing the integrated navigation system comprises the following steps: the angular velocity of the navigation coordinate system relative to the earth coordinate system caused by the carrier velocity is not considered during attitude updating, and on the premise that the same carrier-velocity-induced term is not considered during velocity updating, the attitude update formula during open-loop initial alignment is:

$$\dot{C}_b^n = C_b^n\left(\omega_{ib}^b\times\right) - \left(\omega_{ie}^n\times\right)C_b^n \qquad (6)$$

the velocity update formula during open-loop initial alignment is:

$$\dot{v}^n = f^n - 2\,\omega_{ie}^n\times v^n + g^n \qquad (7)$$

wherein: $C_b^n$ represents the attitude matrix of the carrier coordinate system under the navigation coordinate system; $\omega_{ib}^b$ represents the angular velocity of the carrier coordinate system relative to the earth-centered inertial coordinate system; $\omega_{ie}^n$ represents the rotation angular velocity of the navigation system caused by the rotation of the earth; $f^n$ represents the projection of the specific force in the navigation coordinate system; $g^n$ represents the acceleration of gravity;
the error equations of the integrated navigation system are:

$$\dot{\phi} = \phi\times\omega_{in}^n + \varepsilon \qquad (8)$$

$$\delta\dot{v} = f^n\times\phi + C_b^n\,\nabla \qquad (9)$$

wherein: $\phi$ represents the error of the attitude matrix; $\omega_{in}^n$ represents the angular velocity of the navigation system relative to the inertial system; $\varepsilon$ represents the null-shift error of the gyroscope; $\delta v$ represents the velocity error; $f^n$ represents the projection of the specific force in the navigation coordinate system; $\tilde{f}^b$ represents the specific force value measured by the accelerometer; $\nabla$ is the constant bias of the accelerometer.
3. The robustness control method for multi-source fusion localization as claimed in claim 1, wherein the image data feature extraction process comprises:
extracting pixel points with obvious differences in the image data by adopting an improved FAST corner detector, wherein the gray value difference between these pixel points and their surrounding adjacent pixels is larger than a set threshold;
improving the rotation invariance of the feature points by adopting a gray centroid algorithm, and determining the positions of the ORB feature points in the image data by characterizing the pixel points with a BRIEF descriptor.
4. The robust control method for multi-source fusion positioning as claimed in claim 1, wherein the detailed process of the visual inertial navigation fusion comprises:
all state vectors within the sliding window are defined:

$$\mathcal{X} = \left[x_0,\ x_1,\ \ldots,\ x_N,\ \lambda_0,\ \lambda_1,\ \ldots,\ \lambda_M,\ \mathcal{O}_0,\ \mathcal{O}_1,\ \ldots,\ \mathcal{O}_O\right],\qquad x_n = \left[q_{b_n}^{w},\ p_{b_n}^{w},\ v_{b_n}^{w},\ b_a,\ b_g\right],\quad n\in[0,N] \qquad (14)$$

wherein $q_{b_n}^{w}$ represents the rotation matrix between the nth frame of inertial navigation and the world coordinate system; $p_{b_n}^{w}$ represents the translational deviation; $v_{b_n}^{w}$ represents the velocity vector of inertial navigation in the world coordinate system; $b_a$ represents the accelerometer bias; $b_g$ represents the rotational (gyroscope) deviation; n, m and o respectively represent the indexes of the IMU state vectors, landmark points and landmark lines in the sliding window; N, M and O respectively represent the number of key frames in the sliding window and the numbers of landmark points and landmark lines observed by all key frames in the sliding window; $\lambda_k$ represents the inverse depth of the kth landmark point when it is first observed; $\mathcal{O}_l$ is the orthonormal representation of the lth feature line;

the sliding-window-based nonlinear optimization is completed by minimizing the residuals of all state quantities within the sliding window, where the objective function is expressed as:

$$\min_{\mathcal{X}}\left\{\left\|r_p - H_p\,\mathcal{X}\right\|^2 + \sum_{i\in\mathcal{B}}\left\|r_{\mathcal{B}}\!\left(\hat z_{b_i b_{i+1}},\ \mathcal{X}\right)\right\|^2 + \sum_{(k,j)\in\mathcal{F}}\rho\!\left(\left\|r_{\mathcal{F}}\!\left(\hat z_{f_k}^{c_j},\ \mathcal{X}\right)\right\|^2\right) + \sum_{(l,j)\in\mathcal{L}}\rho\!\left(\left\|r_{\mathcal{L}}\!\left(\hat z_{l_l}^{c_j},\ \mathcal{X}\right)\right\|^2\right)\right\} \qquad (15)$$

wherein $\left\{r_p,\ H_p\right\}$ is the prior information from the frames removed from the sliding window because they have stayed in the window too long or share too few common visual features; $r_{(\cdot)}$ represents a residual; $H_p$ represents the Jacobian matrix; $\mathcal{X}$ is all state vectors within the sliding window; $\rho(\cdot)$ is the Cauchy robust kernel function used as a penalty term in the iterative constraint process; $\hat z_{b_i b_{i+1}}$ is the inertial navigation pre-integration between the state quantities of the ith frame and the (i+1)th frame; $r_{\mathcal{B}}$ represents the error between inertial navigation frames; $\mathcal{B}$ represents the inertial navigation frame observations; $r_{\mathcal{F}}$ represents the reprojection error of the visual feature points, and $\hat z_{f_k}^{c_j}$ is the observed value on the normalized plane $c_j$ used when calculating the visual feature point reprojection error; $r_{\mathcal{L}}$ is the reprojection error of the visual feature line, and $\hat z_{l_l}^{c_j}$ is the observed value on the normalized plane $c_j$ used when calculating the visual feature line reprojection error;
to avoid oscillation caused by growth of the iteration step, starting from the initial value, a small increment $\delta\mathcal{X}$ is added and the state is updated iteratively until the requirement is met:

$$\left[e'_p,\ e'_{\mathcal{B}},\ e'_{\mathcal{F}},\ e'_{\mathcal{L}}\right]^{T} = \left[e_p,\ e_{\mathcal{B}},\ e_{\mathcal{F}},\ e_{\mathcal{L}}\right]^{T} + J\,\delta\mathcal{X} \qquad (16)$$

wherein $J$ is the Jacobian matrix of the error vector $e$ with respect to $\mathcal{X}$; $e_p$ and $e'_p$ are respectively the prior errors before and after the iteration; $e_{\mathcal{B}}$ and $e'_{\mathcal{B}}$ are respectively the inertial navigation pre-integration errors before and after the iteration; $e_{\mathcal{F}}$ and $e'_{\mathcal{F}}$ are respectively the visual feature point reprojection errors before and after the iteration; $e_{\mathcal{L}}$ and $e'_{\mathcal{L}}$ are respectively the visual feature line reprojection errors before and after the iteration.
5. The robustness control method for multi-source fusion positioning according to claim 1, wherein the inertial navigation and lidar characteristic point extraction process comprises:
carrying out down-sampling on the ground point cloud of each frame by using a voxel grid filtering method, and marking the down-sampled ground point cloud as a ground characteristic point;
after dynamic obstacles are removed, extracting line feature points and surface feature points from the remaining non-ground point cloud by defining smoothness;
setting a smoothness threshold value, and taking points with smoothness larger than the smoothness threshold value as line characteristic points; points whose smoothness is less than the threshold are regarded as surface feature points.
6. The robust control method for multi-source fusion positioning as claimed in claim 5, wherein the inertial navigation lidar fusion process comprises:
the lidar frame is composed of all the extracted features:

$$\mathbb{F}_i = \left\{F_i^e,\ F_i^p\right\} \qquad (17)$$

wherein $F_i^e$ is the edge feature extracted at the ith moment and $F_i^p$ is the plane feature extracted at the ith moment;

selecting key frames and using a sliding window to contain a fixed number of sub-key frames $\left\{\mathbb{F}_{i-n}, \ldots, \mathbb{F}_i\right\}$; the sub-key frames are transformed by the transformation $T_i$ under the world coordinate system and finally merged into a voxel map $M_i$:
$$M_i = \left\{M_i^e,\ M_i^p\right\},\qquad M_i^e = {}'F_i^e \cup {}'F_{i-1}^e \cup \cdots \cup {}'F_{i-n}^e,\qquad M_i^p = {}'F_i^p \cup {}'F_{i-1}^p \cup \cdots \cup {}'F_{i-n}^p \qquad (18)$$

wherein $M_i^e$ denotes the edge feature voxel map; $M_i^p$ denotes the planar feature voxel map; ${}'F_i^e$ is the ith frame edge feature expressed in the world coordinate system; ${}'F_{i-1}^e$ is the (i-1)th frame edge feature expressed in the world coordinate system; ${}'F_{i-n}^e$ is the (i-n)th frame edge feature expressed in the world coordinate system; ${}'F_i^p$ is the ith frame plane feature expressed in the world coordinate system; ${}'F_{i-1}^p$ is the (i-1)th frame plane feature expressed in the world coordinate system; ${}'F_{i-n}^p$ is the (i-n)th frame plane feature expressed in the world coordinate system;
matching the edge features and the plane features, and establishing the pose-solving equations from the point-to-line and point-to-plane distance relations:

$$d_{e_k} = \frac{\left|\left(p_{i+1,k}^{e}-p_{i,u}^{e}\right)\times\left(p_{i+1,k}^{e}-p_{i,v}^{e}\right)\right|}{\left|p_{i,u}^{e}-p_{i,v}^{e}\right|} \qquad (19)$$

$$d_{p_k} = \frac{\left|\left(p_{i+1,k}^{p}-p_{i,u}^{p}\right)\cdot\left(\left(p_{i,u}^{p}-p_{i,v}^{p}\right)\times\left(p_{i,u}^{p}-p_{i,w}^{p}\right)\right)\right|}{\left|\left(p_{i,u}^{p}-p_{i,v}^{p}\right)\times\left(p_{i,u}^{p}-p_{i,w}^{p}\right)\right|} \qquad (20)$$

wherein $p_{i+1,k}^{e}$ represents the edge coordinates of point k at the (i+1)th moment; $p_{i,u}^{e}$ represents the edge coordinates of point u at the ith moment; $p_{i,v}^{e}$ represents the edge coordinates of point v at the ith moment; $p_{i,u}^{p}$ represents the plane-point coordinates of point u at the ith moment; $p_{i,v}^{p}$ represents the plane-point coordinates of point v at the ith moment; $p_{i+1,k}^{p}$ represents the plane-point coordinates of point k at the (i+1)th moment; $p_{i,w}^{p}$ represents the plane-point coordinates of point w at the ith moment;
the nonlinear problem in scan matching is expressed as a linear problem that is solved iteratively:

$$A\,T = b \qquad (21)$$

wherein $T$ is the transformation matrix of the key frame under the world coordinate system, and $A$ and $b$ are obtained from the linearized pose $T$, with $A$ representing the rotation and $b$ the translation.
7. The robustness control method for multi-source fusion positioning according to claim 6, wherein the process of extracting the feature points of the camera and the lidar comprises:
in three-dimensional space there are n spatial points P with coordinates $P_i = [X_i, Y_i, Z_i]^{T}$, $i = 1, 2, 3, 4, \ldots, n$, whose projections onto the pixel coordinate system are $p_{uv,i} = [u_i, v_i]^{T}$; the relationship between the camera spatial points and the pixel positions is:

$$s_i\,p_{uv,i} = K\,T\,P_i \qquad (22)$$

wherein $s_i$ represents the scale information of the image pixel point, the matrix $K$ is the camera intrinsic parameter matrix, and $T$ is the camera rotation and translation matrix expressed as a Lie group element;
establishing an error term equation by the least-squares method to estimate the pose of the camera:

$$T^{*} = \arg\min_{T}\ \frac{1}{2}\sum_{i=1}^{n}\left\| p_{uv,i} - \frac{1}{s_i}\,K\,T\,P_i \right\|_{2}^{2} \qquad (23)$$

wherein formula (23) represents the pose $T^{*}$ at which the reprojection error takes its minimum value, with the transformation matrix $T$ as the optimization variable.
8. The robustness control method for multi-source fusion positioning according to claim 7, further comprising constructing a minimized re-projection equation to adjust the pose of the camera so that the re-projection error of the camera position reaches a minimum value;
the coordinate $P$ of the characteristic point is transformed into a point $P'$ under the camera coordinate system; the transformed coordinates are:

$$P' = \left(T\,P\right)_{1:3} = \left[X',\ Y',\ Z'\right]^{T} \qquad (24)$$

the camera projection model with respect to $P'$, when expanded, gives:

$$u = f_x\,\frac{X'}{Z'} + c_x,\qquad v = f_y\,\frac{Y'}{Z'} + c_y \qquad (25)$$
wherein $f_x$ is the focal length of the camera intrinsics along the x-axis, $f_y$ is the focal length of the camera intrinsics along the y-axis, $c_x$ is the optical center of the camera along the x-axis, and $c_y$ is the optical center of the camera along the y-axis.
9. The robustness control method for multi-source fusion positioning according to claim 1, further comprising, after performing the quality grade division, controlling the number and quality of the feature points participating in the solution of the next frame, and ensuring that the solution is performed with the smallest number of feature points, so that the system achieves the highest precision.
10. A robustness control system for multi-source fusion positioning is characterized by comprising a first extraction and fusion module, a second extraction and fusion module, a calculation module and a division module;
the first extraction and fusion module is used for extracting visual feature points aiming at the same frame of image data of the same scene to obtain pixel positions of the visual feature points under an image pixel coordinate system, obtaining a first pose through visual inertial navigation fusion, and calculating a visual feature point error and a first re-projection error according to the first pose;
the second extraction and fusion module is used for extracting the point cloud characteristic points of the ground point cloud data of the same frame in the same scene by inertial navigation and laser radar to obtain three-dimensional coordinates of the point cloud characteristic points under world coordinates, and obtaining a second pose through inertial navigation and laser radar fusion; and calculating a point cloud characteristic point error and a second reprojection error according to the second pose;
the calculation module is used for calculating a third pose of the laser radar and the camera by adopting a PNP method based on the pixel position of the visual feature point in an image pixel coordinate system and the three-dimensional coordinate of the point cloud feature point in world coordinates, and calculating a camera position feature point error and a third reprojection error according to the third pose;
the dividing module is used for comparing the first re-projection error, the second re-projection error and the third re-projection error, respectively carrying out weight ranking on the visual characteristic point error, the point cloud characteristic point error and the camera position characteristic point error, and, after the environmental characteristics are matched, carrying out quality grade division.
CN202210562354.XA 2022-05-23 2022-05-23 Robustness control method and system for multi-source fusion positioning Active CN114648584B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210562354.XA CN114648584B (en) 2022-05-23 2022-05-23 Robustness control method and system for multi-source fusion positioning

Publications (2)

Publication Number Publication Date
CN114648584A true CN114648584A (en) 2022-06-21
CN114648584B CN114648584B (en) 2022-08-30

Family

ID=81997550

Country Status (1)

Country Link
CN (1) CN114648584B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109465832A (en) * 2018-12-18 2019-03-15 哈尔滨工业大学(深圳) High-precision vision and the tight fusion and positioning method of IMU and system
CN110261870A (en) * 2019-04-15 2019-09-20 浙江工业大学 It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method
CN110428467A (en) * 2019-07-30 2019-11-08 四川大学 A kind of camera, imu and the united robot localization method of laser radar
CN110706279A (en) * 2019-09-27 2020-01-17 清华大学 Global position and pose estimation method based on information fusion of global map and multiple sensors
CN111207774A (en) * 2020-01-17 2020-05-29 山东大学 Method and system for laser-IMU external reference calibration
CN111595333A (en) * 2020-04-26 2020-08-28 武汉理工大学 Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion
CN111983639A (en) * 2020-08-25 2020-11-24 浙江光珀智能科技有限公司 Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU
CN112347840A (en) * 2020-08-25 2021-02-09 天津大学 Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN112598757A (en) * 2021-03-03 2021-04-02 之江实验室 Multi-sensor time-space calibration method and device
CN113436260A (en) * 2021-06-24 2021-09-24 华中科技大学 Mobile robot pose estimation method and system based on multi-sensor tight coupling
CN113721260A (en) * 2021-08-26 2021-11-30 南京邮电大学 Online combined calibration method for laser radar, binocular camera and inertial navigation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
THIEN-MINH NGUYEN ET AL.: "VIRAL SLAM: Tightly Coupled Camera-IMU-UWB-Lidar SLAM", 《ARXIV:2105.03296V3》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115442584A (en) * 2022-08-30 2022-12-06 中国传媒大学 Multi-sensor fusion irregular surface dynamic projection method
CN115442584B (en) * 2022-08-30 2023-08-18 中国传媒大学 Multi-sensor fusion type special-shaped surface dynamic projection method
CN117911520A (en) * 2022-10-12 2024-04-19 北京三快在线科技有限公司 Camera internal parameter calibration method and automatic driving equipment
CN115774265A (en) * 2023-02-15 2023-03-10 江苏集萃清联智控科技有限公司 Two-dimensional code and laser radar fusion positioning method and device for industrial robot
CN118797214A (en) * 2024-09-10 2024-10-18 电子科技大学 Unmanned system flexible collaboration method for intelligent seal detection scene

Also Published As

Publication number Publication date
CN114648584B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN114648584B (en) Robustness control method and system for multi-source fusion positioning
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN111583369B (en) Laser SLAM method based on facial line angular point feature extraction
CN110068335B (en) Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
CN112304307A (en) Positioning method and device based on multi-sensor fusion and storage medium
CN110223348A (en) Robot scene adaptive bit orientation estimation method based on RGB-D camera
CN111024066A (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN113432600A (en) Robot instant positioning and map construction method and system based on multiple information sources
CN106595659A (en) Map merging method of unmanned aerial vehicle visual SLAM under city complex environment
CN114419147A (en) Rescue robot intelligent remote human-computer interaction control method and system
WO2024109002A1 (en) Bionic-vision multi-source-information unmanned intelligent sensing platform
CN116619358A (en) Self-adaptive positioning optimization and mapping method for autonomous mining robot
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN111998862A (en) Dense binocular SLAM method based on BNN
CN113674412A (en) Pose fusion optimization-based indoor map construction method and system and storage medium
CN112731503B (en) Pose estimation method and system based on front end tight coupling
CN116989763B (en) Fusion positioning and mapping method for amphibious unmanned system
CN116681733B (en) Near-distance real-time pose tracking method for space non-cooperative target
CN112945233A (en) Global drift-free autonomous robot simultaneous positioning and map building method
CN117152228A (en) Self-supervision image depth estimation method based on channel self-attention mechanism
WO2023030062A1 (en) Flight control method and apparatus for unmanned aerial vehicle, and device, medium and program
CN116045965A (en) Multi-sensor-integrated environment map construction method
CN116147618A (en) Real-time state sensing method and system suitable for dynamic environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant