CN112767482B - Indoor and outdoor positioning method and system with multi-sensor fusion
Info
- Publication number
- CN112767482B (application CN202110083601.3A)
- Authority
- CN
- China
- Prior art keywords
- edge
- image
- projection
- pyramid
- current frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/43—Determining position using carrier phase measurements, e.g. kinematic positioning; using long or short baseline interferometry
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/08—Projecting images onto non-planar surfaces, e.g. geodetic screens
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2004—Aligning objects, relative positioning of parts
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Software Systems (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Architecture (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Position Fixing By Use Of Radio Waves (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a multi-sensor fusion indoor and outdoor positioning method and system comprising the following steps: performing feature acquisition and processing on the acquired RGB image and depth image to generate an edge image pyramid; calculating a DT distance matrix and processing the edge image pyramid to generate a DT distance matrix pyramid; performing a projection operation on the edge features of the current frame of the RGB image and the depth image, and determining the projection association range of the current frame and the key frame; mapping the edge features within the projection association range to the DT distance matrix of the key frame, and determining the reprojection error; solving the relative pose information according to the reprojection error; and fusing the received GPS data with the solved relative pose information to obtain the positioning information. The method achieves high positioning accuracy.
Description
Technical Field
The invention relates to the technical field of positioning methods, in particular to an indoor and outdoor positioning method and system with multi-sensor fusion.
Background
The statements herein merely provide background information related to the present disclosure and may not necessarily constitute prior art.
With the development and adoption of robot technology, robot positioning based on multi-sensor fusion has gradually become a research hotspot in key technical fields of robotics. Commonly used positioning sensors currently include GPS, IMU and cameras; cameras in turn include monocular cameras, binocular cameras and depth cameras.
At present, final positioning is mostly achieved by fusing visual, GPS and IMU information, and on the visual side pose estimation mostly relies on point features and line features. Such features, however, are easily affected by illumination changes and have low stability. The document with application No. 2020105268498 discloses an edge positioning method that detects whether a captured image includes a staircase; when it does, stair edge information is extracted from the captured image, and edge positioning is performed according to the stair edge information to obtain edge positioning information. The inventors found, however, that this method uses a single vision sensor, has low stability and cannot provide high-precision positioning in large-scale scenes.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a multi-sensor fusion indoor and outdoor positioning method that has strong environmental adaptability and can achieve high-precision positioning over a large scene range.
In order to achieve the purpose, the invention adopts the following technical scheme:
In a first aspect, an embodiment of the present invention provides a multi-sensor fusion indoor and outdoor positioning method, including the following steps:
performing feature acquisition and processing on the acquired RGB image and depth image to generate an edge image pyramid;
calculating a DT distance matrix, processing the edge image pyramid, and generating a DT distance matrix pyramid;
performing a projection operation on the edge features of the current frame of the RGB image and the depth image, and determining the projection association range of the current frame and the key frame;
Mapping the edge features in the projection association range to a DT distance matrix of the key frame, and determining a reprojection error;
solving relative pose information according to the reprojection error;
and fusing the received GPS data and the solved relative pose information to obtain positioning information.
Further, the step of generating the edge image pyramid includes:
carrying out Gaussian pyramid operation on the RGB image and the depth image;
and extracting edge features in the RGB image and the depth image pyramid by adopting a canny algorithm to generate an edge image pyramid.
Further, the specific step of determining the projection association range of the current frame and the key frame is as follows:
initializing the edge characteristics of the current frame based on the edge information and the depth information of the current frame of the image;
projecting the initialized edge features of the current frame, according to the rotation matrix and the translation vector, onto the DT distance matrix pyramid of the set key frame;
Back projecting the edge characteristics of the key frame to the current frame according to the rotation matrix and the translation vector;
and determining the projection association range of the current frame and the key frame according to the two projection results.
Further, the method for determining the reprojection error comprises the following steps: and mapping the edge features in the projection association range to the DT distance matrix of the key frame, determining projection errors according to the positions of the edge points projected by the current frame on the DT distance matrix, and weighting the projection errors of all the edge points to obtain the final re-projection error.
Further, the pose information solving method comprises the following steps: and taking the rotation matrix R and the translation vector t as variables, performing least squares iteration processing according to the obtained reprojection error, and continuously iterating to obtain final relative pose information.
Further, the received RTK-based GPS data and the solved relative pose information are fused based on Kalman filtering.
Furthermore, a map is built based on the obtained positioning information and the generated point cloud data, and a real-time three-dimensional map is generated.
In a second aspect, embodiments of the present invention provide a multi-sensor fusion indoor and outdoor positioning system, including:
an edge image pyramid generation module: used for performing feature acquisition and processing on the acquired RGB image and depth image to generate an edge image pyramid;
a DT distance matrix pyramid generation module: used for calculating a DT distance matrix, processing the edge image pyramid and generating a DT distance matrix pyramid;
a projection module: used for performing a projection operation on the edge features of the current frame of the RGB image and the depth image, eliminating non-associated edges and determining the projection association range of the current frame and the key frame;
a reprojection error calculation module: used for mapping the edge features within the projection association range to the DT distance matrix of the key frame to determine a reprojection error;
a relative pose information calculation module: used for solving the relative pose information according to the reprojection error;
a fusion module: used for fusing the received GPS data with the solved relative pose information to obtain the positioning information.
In a third aspect, the present invention provides an electronic device comprising a memory and a processor, and computer instructions stored on the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the method of the first aspect.
In a fourth aspect, the present invention provides a computer readable storage medium for storing computer instructions which, when executed by a processor, perform the steps of the method of the first aspect.
The invention has the beneficial effects that:
1. the method of the invention uses the edge characteristics which are less influenced by illumination change as the information source in the visual odometer, improves the environmental adaptability of the visual odometer calculation method in the low-texture environment, and is simultaneously suitable for indoor and outdoor scene positioning and map building.
2. The method uses the DT distance matrix as the calculation consideration standard of the reprojection error, and has simple calculation and strong robustness.
3. According to the method, the edge features in the projected association range are mapped to the DT distance image of the key frame, the re-projection error is determined, the edge features which are not associated with each other in the adjacent frames are eliminated, the calculated amount is reduced, and the description accuracy of the re-projection error is improved.
4. The method of the invention uses the fusion of GPS data and relative pose information as positioning information, provides a loop correction method based on RTK-GPS technology as global positioning, improves the positioning accuracy of the algorithm in a large-scale scene, and reduces the accumulated error.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, are included to provide a further understanding of the application, and the description of the exemplary embodiments of the application are intended to be illustrative of the application and are not intended to limit the application.
FIG. 1 is a schematic flow chart of example 1 of the present invention;
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
As introduced by the background art, the existing indoor and outdoor positioning method uses a single vision sensor, is not high in stability, and cannot perform high-precision positioning in a large-range scene.
In an exemplary embodiment of the present application, as shown in fig. 1, a multi-sensor fusion indoor and outdoor positioning method includes the following steps:
Step 1: perform feature acquisition and processing on the acquired RGB image and depth image to generate an edge image pyramid.
Specifically, the RGB image and the depth image are collected by a depth camera: the depth camera captures scene information in the environment to obtain real-time environment information, forming a continuous image sequence. The generated image sequence contains two types of images, the RGB image and the depth image. Feature acquisition and processing of the received RGB image and depth image comprises the following steps:
Step 1.1: carry out a Gaussian pyramid operation on the RGB image and the depth image received from the depth camera to generate image pyramids of the RGB image and the depth image.
Step 1.2: extract the edge features in the RGB image and depth image pyramids using the Canny algorithm to generate edge image pyramids of the RGB image and the depth image.
The edge features which are less affected by illumination change are used as information sources in the visual odometer, so that the environmental adaptability of the visual odometer calculation method in a low-texture environment is improved, and the method is suitable for indoor and outdoor scene positioning and map building.
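For illustration only, a minimal Python/OpenCV sketch of step 1 is given below; the pyramid depth, the Canny thresholds and the function interface are assumptions and are not specified by the patent.

```python
import cv2

def build_edge_pyramid(rgb, depth, levels=4, canny_lo=50, canny_hi=150):
    """Gaussian pyramids of the RGB and depth images plus per-level Canny edge maps."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    gray_pyr, depth_pyr = [gray], [depth]
    for _ in range(1, levels):
        gray_pyr.append(cv2.pyrDown(gray_pyr[-1]))    # Gaussian pyramid of the RGB (gray) image
        depth_pyr.append(cv2.pyrDown(depth_pyr[-1]))  # matching pyramid of the depth image
    edge_pyr = [cv2.Canny(g, canny_lo, canny_hi) for g in gray_pyr]  # edge image per level
    return gray_pyr, depth_pyr, edge_pyr
```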
Step 2: calculate a DT distance matrix and process the extracted edge image pyramid with the DT distance matrix algorithm to generate a DT distance matrix pyramid.
And the DT distance matrix is used as a calculation consideration standard of the reprojection error, so that the calculation is simple and the robustness is strong.
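As an illustration, the DT distance matrix of each pyramid level could be obtained with OpenCV's distance transform; because cv2.distanceTransform measures the distance to the nearest zero pixel, the Canny edge image is inverted first. This is an assumed implementation, not taken verbatim from the patent.

```python
import cv2

def build_dt_pyramid(edge_pyr):
    """For each edge image, a matrix whose entries are the distance to the nearest edge pixel."""
    dt_pyr = []
    for edges in edge_pyr:
        non_edge = cv2.bitwise_not(edges)                      # edge pixels -> 0, background -> 255
        dt = cv2.distanceTransform(non_edge, cv2.DIST_L2, 3)   # L2 distance to the nearest edge
        dt_pyr.append(dt)
    return dt_pyr
```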
Step 3: perform a projection operation on the edge features of the current frame of the RGB image and the depth image, and determine the projection association range of the current frame and the key frame; the specific steps are as follows:
step 3.1: and initializing the edge characteristics of the current frame based on the edge information and the depth information of the current frame of the image.
where p_c denotes the two-dimensional coordinate vector of a point in the edge information of the current frame, Z denotes the depth of that point obtained from the depth image, and cx, cy, fx, fy are the camera intrinsic parameters, with cx, cy the offsets of the camera optical axis in the image coordinate system and fx, fy the focal lengths; P_c denotes the three-dimensional coordinate vector of the three-dimensional point computed from the two-dimensional point p_c of the current frame and its depth information.
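A minimal sketch of this initialization, assuming the standard pinhole back-projection implied by the intrinsic parameters defined above (the function name and interface are illustrative assumptions):

```python
import numpy as np

def unproject(p_c, Z, fx, fy, cx, cy):
    """Lift a 2-D edge point p_c = (u, v) with depth Z to the 3-D point P_c in the camera frame."""
    u, v = p_c
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])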
Step 3.2: project the initialized (three-dimensional) edge features of the current frame, according to the rotation matrix R and the translation vector t, onto the DT distance matrix pyramid of the set key frame.
For the initial projection, both the rotation matrix R and the translation vector t are set to 0; in subsequent projections, R and t are set to the relative pose of the frame preceding the current frame.
P'_c = R · P_c + t
where P'_c denotes the three-dimensional coordinate vector of the point after the pose transformation, p'_c denotes the two-dimensional coordinate vector of the current-frame pixel projected onto the key frame, and X, Y denote the three-dimensional coordinate values of P'_c.
Step 3.3: at the same time, back-project the edge features of the key frame onto the current frame according to the rotation matrix R and the translation vector t.
P'_k = R⁻¹ · (P_k − t)
where p_k denotes the two-dimensional coordinate vector of a point in the edge information of the key frame, Z denotes the depth of the point p_k obtained from the depth image, P_k denotes the three-dimensional coordinate vector of the three-dimensional point computed from the two-dimensional point p_k of the key frame and its depth information, P'_k denotes the three-dimensional coordinate vector of the point after the pose transformation, and p'_k denotes the two-dimensional coordinate vector of the key-frame pixel projected onto the current frame.
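A combined sketch of steps 3.2 and 3.3, assuming the pinhole projection model; R is the rotation matrix, t the translation vector, and R⁻¹ is written as the transpose Rᵀ for an orthonormal rotation. Function names are illustrative assumptions.

```python
import numpy as np

def project(P, fx, fy, cx, cy):
    """Project a 3-D point onto the image plane of the target frame."""
    X, Y, Z = P
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

def warp_current_to_key(P_c, R, t):
    return R @ P_c + t          # P'_c = R * P_c + t (step 3.2)

def warp_key_to_current(P_k, R, t):
    return R.T @ (P_k - t)      # P'_k = R^-1 * (P_k - t) (step 3.3)
```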
Step 3.4: determine the projection association range of the current frame and the key frame according to the results of the two projections.
In this embodiment, the key frame may be determined by using an existing key frame determination method, and a specific method thereof is not described in detail herein.
Step 4: map the edge features within the projection association range to the DT distance matrix of the key frame, and determine the reprojection error.
specifically, the edge features in the projection association range are mapped to the DT distance matrix of the keyframe, projection errors are determined according to the positions of the edge points that have been projected on the DT distance matrix, and the projection errors of all the edge points are weighted to obtain the final re-projection error.
where r denotes the reprojection error computed for a single pixel, ξ_kc denotes the rotation-translation matrix composed of R and t, and S denotes the set of edge points with valid values in the current frame.
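For illustration, the per-point error can be read directly from the key frame's DT matrix at the projected pixel position; the sketch below assumes a weighted sum of squared DT values over the associated edge set S, with the particular weighting scheme being an assumption rather than something spelled out in the patent.

```python
import numpy as np

def reprojection_error(points_3d, R, t, dt_key, fx, fy, cx, cy, weights=None):
    """Weighted reprojection error of the current-frame edge points against the key frame's DT matrix."""
    h, w = dt_key.shape
    errs = []
    for P_c in points_3d:                       # edge points inside the association range
        X, Y, Z = R @ P_c + t                   # warp into the key frame
        u = int(round(fx * X / Z + cx))
        v = int(round(fy * Y / Z + cy))
        if 0 <= u < w and 0 <= v < h:
            errs.append(dt_key[v, u])           # distance to the nearest key-frame edge
    errs = np.asarray(errs)
    if weights is None:
        weights = np.ones_like(errs)
    return float(np.sum(weights * errs ** 2))
```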
Step 5: solve the relative pose information according to the reprojection error.
specifically, the rotation matrix R and the translation vector t are used as variables, least squares iteration processing is carried out according to the obtained reprojection error, and the final relative pose xi is obtained through continuous iteration * 。
ξ * =arg min∑r 2
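One possible realisation of this iterative least-squares solve, parameterising the pose as a Rodrigues rotation vector plus translation and delegating the iteration to SciPy; the parameterisation and solver choice are assumptions, since the patent only requires iterating over R and t.

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def residuals(xi, points_3d, dt_key, fx, fy, cx, cy):
    """DT-based residual of every associated edge point for pose xi = (rvec, t)."""
    R, _ = cv2.Rodrigues(xi[:3])
    t = xi[3:]
    h, w = dt_key.shape
    res = []
    for P_c in points_3d:
        X, Y, Z = R @ P_c + t
        u, v = int(fx * X / Z + cx), int(fy * Y / Z + cy)
        res.append(dt_key[v, u] if (0 <= u < w and 0 <= v < h) else 0.0)
    return np.asarray(res)

def solve_relative_pose(points_3d, dt_key, fx, fy, cx, cy, xi0=None):
    xi0 = np.zeros(6) if xi0 is None else xi0
    sol = least_squares(residuals, xi0, args=(points_3d, dt_key, fx, fy, cx, cy))
    R, _ = cv2.Rodrigues(sol.x[:3])
    return R, sol.x[3:]            # final relative pose (R, t)
```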
Through steps 1 to 5, the edge-VO-based (edge visual odometry) relative pose information is obtained.
Step 6: fuse the received GPS data with the solved pose information for positioning correction to obtain the positioning information.
Specifically, RTK-based GPS data are received through an ROS serial-port instruction and fused with a data-discrimination fusion strategy. When the GPS data quality is high, the weight coefficient corresponding to the GPS data is adaptively increased, and the GPS positioning data and the relative pose information obtained in step 5 are fused based on Kalman filtering to obtain positioning information based on multi-sensor fusion.
When the GPS data quality is only average, the weight coefficient of the GPS data is adaptively reduced, and the GPS positioning data and the relative pose information obtained in step 5 are fused based on Kalman filtering to obtain the positioning information based on multi-sensor fusion.
(x, y, z) = transform(lon, lat, alt)

Δt_GPS = (Δx, Δy, Δz)

t_fusion = EKF(Δt_GPS, t_VO)

T_final = (R, t_fusion)

where lon denotes the longitude, lat the latitude and alt the altitude of the GPS reading; Δt_GPS denotes the relative displacement change based on the GPS data; t_VO denotes the displacement calculated by the visual VO; (x, y, z) are the converted ECEF coordinates obtained from the GPS data; (Δx, Δy, Δz) is the difference obtained by converting the absolute ECEF coordinates of the GPS data into relative coordinates; and t_fusion denotes the fused translation vector.
When the signal quality is poor, the GPS data is not introduced, and the fusion is not carried out.
The data quality signal judgment standard of the GPS can be set according to actual needs, and will not be described in detail herein.
The data discrimination fusion strategy improves the precision and robustness of multi-sensor fusion positioning, and reduces the influence of GPS data signal instability on the algorithm.
According to the method, GPS data and relative pose information are fused to serve as positioning information, an RTK-GPS technology is used as a loop correction method of global positioning, positioning accuracy of the algorithm in a large-range scene is improved, and accumulated errors are reduced.
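For illustration, the conversion of RTK-GPS longitude/latitude/altitude to ECEF coordinates and a strongly simplified stand-in for the Kalman fusion could look as follows; the WGS-84 constants are standard, but the single-gain blend shown here only mimics the adaptive weight coefficient and is an assumption, not the patented filter.

```python
import numpy as np

def geodetic_to_ecef(lon_deg, lat_deg, alt):
    """(x, y, z) = transform(lon, lat, alt) using the WGS-84 ellipsoid."""
    a, e2 = 6378137.0, 6.69437999014e-3            # semi-major axis, first eccentricity squared
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    N = a / np.sqrt(1.0 - e2 * np.sin(lat) ** 2)
    x = (N + alt) * np.cos(lat) * np.cos(lon)
    y = (N + alt) * np.cos(lat) * np.sin(lon)
    z = (N * (1.0 - e2) + alt) * np.sin(lat)
    return np.array([x, y, z])

def fuse_translation(delta_t_gps, t_vo, gps_quality):
    """Blend the GPS relative displacement with the VO translation to give t_fusion."""
    if gps_quality <= 0.0:        # poor signal: GPS data are not introduced
        return t_vo
    k = float(gps_quality)        # adaptive weight coefficient in [0, 1] (assumed form)
    return k * delta_t_gps + (1.0 - k) * t_vo
```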
Step 7: build a map based on the positioning information and the generated point cloud data to generate a real-time three-dimensional map.
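A sketch of how the real-time map of step 7 might be accumulated: the depth image is back-projected to a local point cloud, transformed by the fused pose, and appended to a global map. The depth scale and the plain list-based map are assumptions made for illustration.

```python
import numpy as np

def depth_to_cloud(depth, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project a depth image to a local 3-D point cloud (N x 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    Z = depth.astype(np.float32) * depth_scale     # raw depth units -> metres (assumed scale)
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    cloud = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    return cloud[cloud[:, 2] > 0]                  # drop invalid (zero-depth) pixels

def add_frame_to_map(global_map, cloud, R, t):
    """Transform the local cloud with the fused pose and accumulate it in the global map."""
    global_map.append(cloud @ R.T + t)             # world = R * P + t for every point
    return global_map
```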
In this embodiment, the steps not described in detail may be performed by a conventional method.
Example 2:
This embodiment discloses a multi-sensor fusion indoor and outdoor positioning system, including:
an edge image pyramid generation module: used for performing feature acquisition and processing on the acquired RGB image and depth image to generate an edge image pyramid;
a DT distance matrix pyramid generation module: used for calculating a DT distance matrix, processing the edge image pyramid and generating a DT distance matrix pyramid;
a projection module: used for performing a projection operation on the edge features of the current frame of the RGB image and the depth image, eliminating non-associated edges and determining the projection association range of the current frame and the key frame;
a reprojection error calculation module: used for mapping the edge features within the projection association range to the DT distance matrix of the key frame to determine a reprojection error;
a relative pose information calculation module: used for solving the relative pose information according to the reprojection error;
a fusion module: used for fusing the received GPS data with the solved relative pose information to obtain the positioning information.
Example 3:
This embodiment provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement the multi-sensor fusion indoor and outdoor positioning method described in embodiment 1.
Example 4:
This embodiment provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the multi-sensor fusion indoor and outdoor positioning method described in embodiment 1.
The steps involved in the above embodiments 2-4 correspond to the first embodiment of the method, and the detailed description thereof can be found in the relevant description of the first embodiment. The term "computer-readable storage medium" should be taken to include a single medium or multiple media containing one or more sets of instructions; it should also be understood to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by a processor and that cause the processor to perform any of the methods of the present invention.
Those skilled in the art will appreciate that the modules or steps of the present invention described above can be implemented using general purpose computer means, or alternatively, they can be implemented using program code that is executable by computing means, such that they are stored in memory means for execution by the computing means, or they are separately fabricated into individual integrated circuit modules, or multiple modules or steps of them are fabricated into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, it is not intended to limit the scope of the present invention, and it should be understood by those skilled in the art that various modifications and variations can be made without inventive efforts by those skilled in the art based on the technical solution of the present invention.
Claims (8)
1. A multi-sensor fusion indoor and outdoor positioning method is characterized by comprising the following steps:
performing feature acquisition and processing on the acquired RGB image and depth image to generate an edge image pyramid;
the generation step of the edge image pyramid comprises the following steps:
carrying out Gaussian pyramid operation on the RGB image and the depth image;
extracting edge features in the RGB image and the depth image pyramid by adopting a canny algorithm to generate an edge image pyramid;
calculating a DT distance matrix, processing the edge image pyramid and generating a DT distance matrix pyramid;
performing projection operation on the edge characteristics of the RGB image and the depth image of the current frame, and determining the projection association range of the current frame and the key frame;
the specific steps for determining the projection association range of the current frame and the key frame are as follows:
initializing the edge characteristics of the current frame based on the edge information and the depth information of the current frame of the image;
projecting the initialized edge characteristics of the current frame according to the rotation matrix and the translation vector, and projecting the edge characteristics to a set DT distance matrix pyramid of the key frame;
back projecting the edge characteristics of the key frame to the current frame according to the rotation matrix and the translation vector;
determining the projection association range of the current frame and the key frame according to the two projection results;
mapping the edge features in the projection association range to a DT distance matrix of the key frame, and determining a reprojection error;
solving relative pose information according to the reprojection error;
and fusing the received GPS data and the solved relative pose information to obtain positioning information.
2. The multi-sensor fusion indoor and outdoor positioning method as claimed in claim 1, wherein the reprojection error is determined by: and mapping the edge features in the projection association range to the DT distance matrix of the key frame, determining projection errors according to the positions of the edge points projected by the current frame on the DT distance matrix, and weighting the projection errors of all the edge points to obtain the final re-projection error.
3. The multi-sensor fusion indoor and outdoor positioning method of claim 1, wherein the pose information solving method comprises the following steps: and taking the rotation matrix R and the translation vector t as variables, performing least squares iteration processing according to the obtained reprojection error, and continuously iterating to obtain final relative pose information.
4. The multi-sensor fusion indoor-outdoor positioning method of claim 1, characterized in that the received RTK-based GPS data and the solved relative pose information are fused based on Kalman filtering.
5. The multi-sensor fusion indoor and outdoor positioning method of claim 1, wherein a real-time three-dimensional map is generated by mapping based on the obtained positioning information and the generated point cloud data.
6. A multi-sensor fused indoor and outdoor positioning system, comprising:
an edge image pyramid generation module: the system is used for carrying out feature acquisition and processing on the acquired RGB image and the depth image to generate an edge image pyramid;
the generation step of the edge image pyramid comprises the following steps:
carrying out Gaussian pyramid operation on the RGB image and the depth image;
extracting edge features in the RGB image and the depth image pyramid by adopting a canny algorithm to generate an edge image pyramid;
DT distance matrix pyramid generation module: the image processing device is used for calculating a DT distance matrix, processing the edge image pyramid and generating a DT distance matrix pyramid;
a projection module: the method comprises the steps of performing projection operation on edge features of a current frame of an RGB image and a depth image, eliminating non-associated edges, and determining projection association ranges of the current frame and a key frame;
the specific steps for determining the projection association range of the current frame and the key frame are as follows:
initializing the edge characteristics of the current frame based on the edge information and the depth information of the current frame of the image;
projecting the initialized edge characteristics of the current frame according to the rotation matrix and the translation vector, and projecting the edge characteristics to a set DT distance matrix pyramid of the key frame;
back projecting the edge characteristics of the key frame to the current frame according to the rotation matrix and the translation vector;
determining the projection association range of the current frame and the key frame according to the two projection results;
a reprojection error calculation module: the method comprises the steps of mapping edge features in a projection association range to a DT distance matrix of a key frame to determine a reprojection error;
the relative pose information calculation module: the relative pose information is solved according to the reprojection error;
a fusion module: and the positioning system is used for fusing the received GPS data with the solved relative pose information to obtain positioning information.
7. An electronic device comprising a memory and a processor and computer instructions stored on the memory and executed on the processor, wherein the computer instructions, when executed by the processor, perform the multi-sensor fused indoor and outdoor positioning method steps of any one of claims 1 to 5.
8. A computer readable storage medium storing computer instructions which, when executed by a processor, perform the multi-sensor fused indoor and outdoor positioning method steps of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110083601.3A CN112767482B (en) | 2021-01-21 | 2021-01-21 | Indoor and outdoor positioning method and system with multi-sensor fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110083601.3A CN112767482B (en) | 2021-01-21 | 2021-01-21 | Indoor and outdoor positioning method and system with multi-sensor fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112767482A CN112767482A (en) | 2021-05-07 |
CN112767482B true CN112767482B (en) | 2022-09-02 |
Family
ID=75703576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110083601.3A Active CN112767482B (en) | 2021-01-21 | 2021-01-21 | Indoor and outdoor positioning method and system with multi-sensor fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112767482B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115908730A (en) * | 2022-11-11 | 2023-04-04 | 南京理工大学 | Edge-based three-dimensional scene reconstruction system method for remote control end under low communication bandwidth |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111060115A (en) * | 2019-11-29 | 2020-04-24 | 中国科学院计算技术研究所 | Visual SLAM method and system based on image edge features |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018049581A1 (en) * | 2016-09-14 | 2018-03-22 | 浙江大学 | Method for simultaneous localization and mapping |
EP3474230B1 (en) * | 2017-10-18 | 2020-07-22 | Tata Consultancy Services Limited | Systems and methods for edge points based monocular visual slam |
US10928207B2 (en) * | 2018-03-02 | 2021-02-23 | DeepMap Inc. | Camera based localization for autonomous vehicles |
CN109993113B (en) * | 2019-03-29 | 2023-05-02 | 东北大学 | Pose estimation method based on RGB-D and IMU information fusion |
CN110706248B (en) * | 2019-08-20 | 2024-03-12 | 广东工业大学 | Visual perception mapping method based on SLAM and mobile robot |
CN111882613B (en) * | 2020-07-24 | 2024-06-14 | 中国科学院上海微系统与信息技术研究所 | Visual odometer method, device, storage medium and equipment based on edge semantics |
- 2021-01-21: application CN202110083601.3A filed in China; granted as patent CN112767482B, status active.
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111060115A (en) * | 2019-11-29 | 2020-04-24 | 中国科学院计算技术研究所 | Visual SLAM method and system based on image edge features |
Also Published As
Publication number | Publication date |
---|---|
CN112767482A (en) | 2021-05-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111561923B (en) | SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion | |
CN109166149B (en) | Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU | |
CN112634451B (en) | Outdoor large-scene three-dimensional mapping method integrating multiple sensors | |
WO2021233029A1 (en) | Simultaneous localization and mapping method, device, system and storage medium | |
CN109307508B (en) | Panoramic inertial navigation SLAM method based on multiple key frames | |
CN111210463B (en) | Virtual wide-view visual odometer method and system based on feature point auxiliary matching | |
CN112304307A (en) | Positioning method and device based on multi-sensor fusion and storage medium | |
CN113706626B (en) | Positioning and mapping method based on multi-sensor fusion and two-dimensional code correction | |
CN109345588A (en) | A kind of six-degree-of-freedom posture estimation method based on Tag | |
CN112556719B (en) | Visual inertial odometer implementation method based on CNN-EKF | |
CN108052103B (en) | Underground space simultaneous positioning and map construction method of inspection robot based on depth inertia odometer | |
CN111462207A (en) | RGB-D simultaneous positioning and map creation method integrating direct method and feature method | |
CN113108771B (en) | Movement pose estimation method based on closed-loop direct sparse visual odometer | |
CN113658337B (en) | Multi-mode odometer method based on rut lines | |
CN113362247A (en) | Semantic live-action three-dimensional reconstruction method and system of laser fusion multi-view camera | |
CN205426175U (en) | Fuse on -vehicle multisensor's SLAM device | |
CN112083403B (en) | Positioning tracking error correction method and system for virtual scene | |
CN116222543B (en) | Multi-sensor fusion map construction method and system for robot environment perception | |
CN116619358A (en) | Self-adaptive positioning optimization and mapping method for autonomous mining robot | |
CN114964276B (en) | Dynamic vision SLAM method integrating inertial navigation | |
CN110827353A (en) | Robot positioning method based on monocular camera assistance | |
CN114494150A (en) | Design method of monocular vision odometer based on semi-direct method | |
CN112731503A (en) | Pose estimation method and system based on front-end tight coupling | |
CN112767482B (en) | Indoor and outdoor positioning method and system with multi-sensor fusion | |
CN112767481B (en) | High-precision positioning and mapping method based on visual edge features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||