CN114842224A - Monocular unmanned aerial vehicle absolute vision matching positioning scheme based on geographical base map - Google Patents
- Publication number
- CN114842224A (application number CN202210418712.XA)
- Authority
- CN
- China
- Prior art keywords
- aerial vehicle
- unmanned aerial
- matching
- image
- base map
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
Abstract
The invention discloses a monocular unmanned aerial vehicle absolute visual matching positioning scheme based on a geographic base map, which comprises the following steps: constructing a geographic base map of the unmanned aerial vehicle flight area and extracting features to obtain a template image; acquiring a picture from the onboard camera and extracting features to obtain the target image to be detected; performing motion constraint on the unmanned aerial vehicle based on an inertial measurement unit, and matching the template image with the target image to be detected to obtain a matching area; constructing a visual-inertial odometer from the output of the onboard camera and the output of the inertial measurement unit and obtaining the local pose of the unmanned aerial vehicle; and performing state joint estimation on the matching area and the local pose of the unmanned aerial vehicle by using an extended Kalman filter algorithm to obtain the real-time pose of the unmanned aerial vehicle. With the invention, precise positioning of the unmanned aerial vehicle in complex environments can be achieved. The monocular unmanned aerial vehicle absolute visual matching positioning scheme based on the geographic base map can be widely applied in the field of unmanned aerial vehicle visual positioning.
Description
Technical Field
The invention relates to the field of unmanned aerial vehicle visual positioning, in particular to a monocular unmanned aerial vehicle absolute visual matching positioning scheme based on a geographic base map.
Background
Unmanned aerial vehicles have the characteristics of low manufacturing cost, long endurance, good concealment, strong survivability, no risk of casualties, simple take-off and landing, convenient operation, and high flexibility and maneuverability, and are suitable for executing demanding tasks in complex and dangerous environments; they therefore have extremely wide application prospects in both the military and civil fields. As the application areas of unmanned aerial vehicles keep expanding, the requirements on their performance, especially their positioning performance, become increasingly strict. For an unmanned aerial vehicle executing tasks in a changeable environment, current positioning and navigation schemes are difficult to satisfy the requirements. At present, unmanned aerial vehicle positioning and navigation mainly rely on the position information provided by the global navigation satellite system (GNSS), but GNSS signals are weak and easily interfered with. In severely occluded outdoor environments and in indoor environments, GNSS cannot work normally and cannot provide stable velocity and position information, which directly prevents the unmanned aerial vehicle from flying normally. This has spurred the development of new methods to supplement or replace satellite navigation in GNSS-denied environments.
Aiming at the problem of unmanned aerial vehicle positioning when the satellite positioning scheme is denied, existing methods mainly match the visual image against an onboard reference image to obtain the absolute position of the unmanned aerial vehicle. However, the reference image and the real-time imagery acquired by the unmanned aerial vehicle differ in altitude, time and viewing angle, and these differences break the correspondence between features in the images, so existing methods struggle to achieve precise positioning of the unmanned aerial vehicle in a real complex environment; a highly reliable matching method therefore needs to be studied to complete precise positioning of the unmanned aerial vehicle in a real complex environment.
Disclosure of Invention
In order to solve the above technical problems, the invention aims to provide a monocular unmanned aerial vehicle absolute visual matching positioning scheme based on a geographic base map, which enables precise positioning of the unmanned aerial vehicle in complex environments.
A monocular unmanned aerial vehicle absolute vision matching positioning scheme based on a geographic base map comprises the following steps:
constructing a geographic base map of the unmanned aerial vehicle flight area and extracting features to obtain a template image;
acquiring a picture from the onboard camera and extracting features to obtain the target image to be detected;
performing motion constraint on the unmanned aerial vehicle based on the inertial measurement unit, and matching the template image with the target image to be detected to obtain a matching area;
according to the output of the airborne camera and the output of the inertia measurement unit, a visual inertia odometer is constructed and the local pose of the unmanned aerial vehicle is obtained;
and performing state joint estimation on the matching area and the local pose of the unmanned aerial vehicle by using an extended Kalman filtering algorithm to obtain the real-time pose of the unmanned aerial vehicle.
Further, the step of constructing a geographical base map of the flight area of the unmanned aerial vehicle and extracting features to obtain a template image specifically comprises:
selecting a downloading source according to the longitude and latitude coordinates, the image zooming level and the image style of the unmanned aerial vehicle flight area to obtain an image tile;
unifying and fusing image tile coordinates to obtain a coordinate scheme;
writing the coordinate scheme into world coordinates of four corner points of the template to be processed to obtain a geographic base map;
and extracting the feature points of the geographic base map, determining the directions of the feature points, and constructing feature point description to obtain a template image.
Further, the step of carrying out motion constraint on the unmanned aerial vehicle based on the inertial measurement unit, matching the template image with the target image to be detected to obtain a matching area specifically comprises:
estimating the motion state of the unmanned aerial vehicle by using an error state Kalman filtering algorithm based on an inertial measurement unit;
establishing a constrained target image to be detected according to the motion state of the unmanned aerial vehicle;
and matching the constrained target image to be detected with the template image to obtain a matching area.
Further, the step of estimating the motion state of the unmanned aerial vehicle by using an error state kalman filter algorithm based on the inertial measurement unit specifically includes:
performing integral updating on the nominal state of the unmanned aerial vehicle by using an inertia measurement unit;
updating time and measurement of the error state of the unmanned aerial vehicle by using an error state Kalman filtering algorithm;
and combining the nominal state and the error state to obtain the motion state of the unmanned aerial vehicle.
Further, the step of matching the constrained target image to be detected with the template image to obtain a matching region specifically includes:
calculating the Euclidean distance between any two points of the constrained target image to be detected and the template image by adopting the Euclidean distance;
eliminating error matching points according to the Euclidean distance and the data structure of the kd tree to obtain a homonymy point set;
and constructing a matching area according to the same-name point set.
Further, the extended Kalman filter algorithm equation is expressed as follows:
In the above formula, the symbols not reproduced here are, in order, the error covariance matrix, the estimate of the initial pose, and the error covariance matrix of the second-order approximation of the unmanned aerial vehicle at key frame k; Q_k is the covariance matrix of the process noise, T_{k,k-1} is the state transition matrix of the unmanned aerial vehicle from key frame k-1 to key frame k, τ_{k,k-1} is the adjoint (companion) matrix of T_{k,k-1}, R_k is the covariance matrix of the measurement noise, K_k is the Kalman gain, and ln(·)∨ and exp(·∧) are the logarithm and exponential operators of SE(3).
Further, the method also comprises the step of correcting the output of the onboard camera and the output of the inertial measurement unit.
The method and the system have the following beneficial effects: based on the geographic base map and the onboard camera, the image taken by the onboard camera over the flight area of the unmanned aerial vehicle is matched with the geographic base map to obtain a matching area, realizing positioning of the unmanned aerial vehicle and measurement updates against the map; meanwhile, the local pose of the unmanned aerial vehicle is estimated by the visual-inertial odometer, state joint estimation is performed on the matching area and the local pose of the unmanned aerial vehicle, the real-time pose of the unmanned aerial vehicle is obtained, and the positioning precision is improved.
Drawings
FIG. 1 is a flowchart illustrating the steps of an absolute visual matching positioning scheme for a monocular unmanned aerial vehicle based on a geographical base map according to the present invention;
FIG. 2 is a flowchart illustrating the geographic registration of template images according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of image tile coordinate transformation according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a camera coordinate transformation according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of error state Kalman filtering in accordance with an embodiment of the present invention;
FIG. 6 is a schematic flow chart of matching regions based on a geographic base map in accordance with an embodiment of the present invention;
FIG. 7 is a flow diagram illustrating joint state estimation according to an embodiment of the present invention;
fig. 8 is a block diagram of a monocular unmanned aerial vehicle absolute visual matching positioning system based on a geographic base map according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments. For the step numbers in the following embodiments, they are set for convenience of illustration only, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
Referring to fig. 1, the invention provides a monocular unmanned aerial vehicle absolute visual matching positioning scheme based on a geographic base map, which comprises the following steps:
s1, referring to FIG. 2, constructing a geographical base map of the flight area of the unmanned aerial vehicle and extracting features to obtain a template image;
the unmanned aerial vehicle may be one or more of an unmanned helicopter, a ducted unmanned aerial vehicle and a rotor unmanned aerial vehicle; with the same visual matching positioning scheme and the same sensors, the positioning performance is consistent across different unmanned aerial vehicle platforms.
S1.1, selecting a download source according to the longitude and latitude coordinates, the image zoom level and the image style of the unmanned aerial vehicle flight area to obtain image tiles; the template database is established in advance without consuming large amounts of manpower and material resources, and can be acquired and updated in real time from open imagery sources (Google imagery and ArcGIS™);
specifically, the image tiles obtained here are Google imagery tiles, which require a zoom level to be specified; as the zoom level increases and the download area grows, the number of image tiles increases by multiples per level.
S1.2, unifying and fusing image tile coordinates to obtain a coordinate scheme;
specifically, since each downloaded image tile is expressed in its own local coordinates, the tile coordinates need to be unified into the same coordinate system so that the image tiles can be fused, and the desired coordinate scheme is obtained through a series of suitable coordinate transformations.
Referring to fig. 3, the specific calculation operations of the coordinate transformation (see the sketch below) are as follows:
1) Longitude and latitude coordinates (long, lat) to tile coordinates (tileX, tileY);
2) Longitude and latitude coordinates (long, lat) to pixel coordinates (pixelX, pixelY);
3) Pixel coordinates (pixelX, pixelY) of a tile (tileX, tileY) back to longitude and latitude coordinates (long, lat).
In all of the above conversions, l is the zoom level of the tile.
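The conversion formulas themselves are not reproduced in this text. As an illustration (an assumption, since the patent's own formulas are not shown here), the following Python sketch gives the standard Web Mercator (slippy-map) tile conversions commonly used for such image tiles; the tile size of 256 pixels is an assumption.

```python
import math

TILE_SIZE = 256  # pixels per tile, assumed

def lonlat_to_tile(lon, lat, l):
    """Longitude/latitude (degrees) to tile coordinates at zoom level l."""
    n = 2 ** l
    lat_rad = math.radians(lat)
    tile_x = int((lon + 180.0) / 360.0 * n)
    tile_y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return tile_x, tile_y

def lonlat_to_pixel(lon, lat, l):
    """Longitude/latitude (degrees) to global pixel coordinates at zoom level l."""
    n = 2 ** l
    lat_rad = math.radians(lat)
    pixel_x = (lon + 180.0) / 360.0 * n * TILE_SIZE
    pixel_y = (1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n * TILE_SIZE
    return pixel_x, pixel_y

def pixel_to_lonlat(pixel_x, pixel_y, l):
    """Global pixel coordinates at zoom level l back to longitude/latitude (degrees)."""
    n = 2 ** l
    lon = pixel_x / (n * TILE_SIZE) * 360.0 - 180.0
    lat = math.degrees(math.atan(math.sinh(math.pi * (1.0 - 2.0 * pixel_y / (n * TILE_SIZE)))))
    return lon, lat
```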
S1.3, writing the coordinate scheme into the world coordinates of the four corner points of the template to be processed to obtain the geographic base map, so that after matching is completed the position can be retrieved conveniently and quickly from the pixel values to complete matching and positioning;
S1.4, extracting the feature points of the geographic base map, determining the directions of the feature points, and constructing feature point descriptors to obtain the template image; by exploiting the invariance properties of the SIFT method, rapid retrieval and matching of the key ground objects for autonomous visual navigation of the unmanned aerial vehicle based on the geographic base map are realized.
S1.4.1, extracting the feature points of the geographic base map;
specifically, firstly a Gaussian scale space is constructed, generating Gaussian-blurred images at different scales; secondly, the Gaussian-blurred images are downsampled to obtain a series of images of successively smaller size; finally, difference-of-Gaussians (DoG) scale-space extremum detection is carried out and some of the edge response points are removed.
The Gaussian kernel is the only kernel capable of generating a multi-scale space. Starting from the input image, the scale parameter of the Gaussian blur function is varied continuously, finally yielding the multi-scale space sequence; the specific calculation is as follows:
L(x,y,σ)=G(x,y,σ)*I(x,y)
in the above formula, L (x, y, σ) is a spatial function of a certain scale in an image, G (x, y, σ) is a gaussian function of variable parameters, I (x, y) is an original input image, σ is a scale space factor, and (x, y) is a feature point coordinate.
Wherein, the smaller σ is, the sharper the image and the finer the local detail it reflects; conversely, the larger σ is, the more blurred the image and the less of its detail can be reflected.
S1.4.2, determining the direction of the characteristic points;
specifically, in order to realize image rotation invariance, the direction of the feature point needs to be assigned.
Firstly, the direction parameter is determined from the gradients of the pixels in the neighborhood of the feature point; secondly, the stable direction of the local structure around the feature point is obtained from the gradient histogram of the image; finally, for each detected feature point its scale value is obtained, and this scale determines the Gaussian image on which the computation is performed:
L(x,y)=G(x,y,σ)*I(x,y);
In the above formula, L (x, y) is a spatial function of a certain scale in an image, G (x, y, σ) is a gaussian function of variable parameters, I (x, y) is an original input image, σ is a scale space factor, and (x, y) is a feature point coordinate.
The direction is assigned to each extreme point through its gradient: the gradient magnitude is the square root of the sum of the squared pixel value difference of the points above and below and the squared pixel value difference of the points to the left and right, and the gradient direction is the arctangent of the quotient of the pixel value difference of the points above and below and that of the points to the left and right; namely:
m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))^2 + (L(x, y+1) − L(x, y−1))^2 )
θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )
s1.4.3, constructing the characteristic point description.
Specifically, for each feature point, there are three pieces of information: location, scale, and orientation. And integrating the characteristic points and the directions of the characteristic points to establish a descriptor for each characteristic point, so that the descriptor is not changed along with various changes, such as illumination change, view angle change and the like. And the descriptors should be highly unique in order to increase the probability of a correct match of feature points.
In order to ensure that the feature vector has rotation invariance, the position and the direction of the image gradient in the neighborhood near the feature point need to be rotated by a direction angle theta by taking the feature point as a center, namely, the x axis of the original image is rotated to the same direction as the main direction.
S1.4.4, obtaining the template image according to the characteristic point description.
S2, acquiring a picture of the onboard camera and extracting features to obtain an image of the target to be detected;
in particular, the feature extraction performed here on the onboard camera photograph is the same as step S1.4.
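The same SIFT feature extraction is applied to the geographic base map (S1.4) and to the onboard camera picture (S2). As an illustration only (not the patent's own implementation; the file name is a hypothetical placeholder), a minimal OpenCV-based sketch of this step might look as follows:

```python
import cv2

# Hypothetical input: the stitched geographic base map (template) or an onboard frame, grayscale.
image = cv2.imread("geo_base_map.png", cv2.IMREAD_GRAYSCALE)

# SIFT builds the Gaussian scale space, detects DoG extrema, assigns orientations,
# and computes 128-dimensional descriptors in a single call.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)

# Each keypoint carries location (pt), scale (size) and orientation (angle);
# descriptors is an N x 128 float array used later for matching.
print(len(keypoints), descriptors.shape)
```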
S3, carrying out motion constraint on the unmanned aerial vehicle based on the inertial measurement unit, and matching the template image with the target image to be detected to obtain a matching area;
s3.1, referring to FIG. 5, estimating the motion state of the unmanned aerial vehicle by using an error state Kalman filtering algorithm based on an inertial measurement unit;
s3.1.1, carrying out integral updating on the nominal state of the unmanned aerial vehicle by using an inertia measurement unit;
specifically, the nominal state integral update model is as follows:
in the above formula, p is the position, v is the velocity, a_m is the acceleration measurement, a_b is the acceleration bias, g is the gravitational acceleration, q is the rotation quaternion, ω_m is the angular velocity measurement, ω_b is the angular velocity bias, and R is the rotation matrix generated from q.
Here R = R{q} denotes the rotation from the IMU sensor body coordinate frame to the inertial frame.
It should be noted that the above equations do not consider noise at all; they follow directly from the kinematic (motion) equations of the inertial measurement unit under the assumption that all noise terms are zero (the standard form is sketched below).
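The integral update model itself is not reproduced in this text. As an illustration (an assumption based on the standard error-state Kalman filter formulation, not taken verbatim from the patent), the continuous-time nominal-state kinematics consistent with the symbols above can be written as:

$$\dot p = v,\qquad \dot v = R\,(a_m - a_b) + g,\qquad \dot q = \tfrac{1}{2}\,q \otimes (\omega_m - \omega_b),\qquad \dot a_b = 0,\qquad \dot\omega_b = 0,\qquad \dot g = 0 .$$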
S3.1.2, updating time and measurement of the error state of the unmanned aerial vehicle by using an error state Kalman filtering algorithm;
specifically, the error state is updated over time; the aim of this process is to determine the linearized dynamics of the error state, and it is carried out at every time step. For each state equation the error state is solved and all second-order small terms are neglected, giving the error-state time update equations below:
in the above formula, δp is the position error, δv is the velocity error, R is the rotation matrix, a_m is the acceleration measurement, a_b is the acceleration bias, δθ is the attitude (angle) error, δa_b is the acceleration bias error, δg is the gravity error, a_n is the acceleration measurement noise, ω_m is the angular velocity measurement, ω_b is the angular velocity bias, δω_b is the angular velocity bias error, and ω_n is the angular velocity measurement noise.
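The update equations themselves are omitted here; the standard linearized error-state dynamics consistent with these symbols (an assumption, following the usual ESKF derivation rather than the patent text) are:

$$\dot{\delta p} = \delta v,\quad \dot{\delta v} = -R\,[a_m - a_b]_\times\,\delta\theta - R\,\delta a_b + \delta g - R\,a_n,\quad \dot{\delta\theta} = -[\omega_m - \omega_b]_\times\,\delta\theta - \delta\omega_b - \omega_n,\quad \dot{\delta a_b} = a_w,\quad \dot{\delta\omega_b} = \omega_w,\quad \dot{\delta g} = 0,$$

where [·]_× denotes the skew-symmetric (cross-product) matrix and a_w, ω_w are the bias random-walk noises (not listed above).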
Measurement update of the error state: during the measurement update process, the error state is corrected only when a new measurement arrives. Since the data of the inertial measurement unit contain considerable noise, the state generally has to be corrected with information from additional sensors.
In general, the basic form of the measurement equation is as follows:
y = h(X_t) + v;
in the above formula, h(·) is the measurement function, which is related to the onboard camera and the inertial measurement unit, and v is white Gaussian noise, v ~ N(0, V).
The error state measurement update equation is as follows:
in the above formula, K is the Kalman gain, P is the error covariance matrix, H is the Jacobian matrix of the observation equation, V is the observation covariance matrix, and the remaining term is the estimated error state (state increment).
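The update equations are not reproduced above; the standard (extended) Kalman measurement update consistent with these quantities, given here as an illustrative assumption, is:

$$K = P H^{\top}\,\big(H P H^{\top} + V\big)^{-1},\qquad \delta\hat{x} = K\,\big(y - h(\hat{X}_t)\big),\qquad P \leftarrow (I - K H)\,P .$$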
S3.1.3, combining the nominal state and the error state to obtain an actual state, namely the motion state of the unmanned aerial vehicle;
the nominal state refers to a main trend of the motion state of the unmanned aerial vehicle, the actual state refers to the actual situation of the motion state of the unmanned aerial vehicle, and the error state refers to a difference value between the actual state and the nominal state.
S3.1.4, merging the nominal state with the error state, and resetting the error state.
In particular, state merging is the merging of the error state with the nominal state, i.e. correcting the drift of the nominal state over time
In the above equation, x is the state vector, δ x is the state increment, p is the position vector, δ p is the position increment, v is the velocity vector, δ v is the velocity increment, q is the quaternion, δ θ is the angle increment, a b Is the acceleration bias, δ a b Is the acceleration offset increment, ω b Is the angular velocity offset, δ ω b Is the angular velocity offset increment, g is the gravitational acceleration, δ g is the gravitational acceleration variation,is a matrix addition, and the matrix addition,is a quaternion multiplication.
Wherein the error state mean is reset after the error has been added to the nominal state. This is particularly relevant for the orientation part, since the new orientation error will be represented locally with respect to the orientation frame of the new nominal state. For the error-state Kalman filter update to be complete, the covariance of the error needs to be updated according to this modification.
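The injection and reset equations are omitted above; in the standard ESKF (an illustrative assumption, not quoted from the patent) they take the form:

$$p \leftarrow p + \delta p,\quad v \leftarrow v + \delta v,\quad q \leftarrow q \otimes q\{\delta\theta\},\quad a_b \leftarrow a_b + \delta a_b,\quad \omega_b \leftarrow \omega_b + \delta\omega_b,\quad g \leftarrow g + \delta g,$$

followed by the reset δx ← 0 and the covariance update P ← G P Gᵀ, where G is the Jacobian of the reset (identity except for a small rotational correction on the attitude block).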
S3.2, establishing the constrained target image to be detected according to the motion state of the unmanned aerial vehicle; restricting the target image to be detected to a relatively well-defined area improves the matching efficiency on the one hand and enhances the fault tolerance of the positioning scheme on the other hand;
and S3.3, referring to FIG. 6, matching the constrained target image to be detected with the template image to obtain a matching area.
S3.3.1, calculating the Euclidean distance between any two points of the constrained target image to be detected and the template image by adopting the Euclidean distance;
specifically, the feature point descriptors of the constrained target image to be detected and of the template image are first obtained, and are then substituted into the Euclidean distance formula:
d(R_i, S_i) = sqrt( Σ_j (r_ij − s_ij)^2 )
In the above formula, R_i is a feature point descriptor of the template image, r_ij is its j-th component, S_i is a feature point descriptor of the target image to be detected, s_ij is its j-th component, and d(R_i, S_i) is the Euclidean distance between the two descriptors.
S3.3.2, eliminating error matching points according to the Euclidean distance and the data structure of the kd tree to obtain a homonymy point set;
specifically, the matching of the feature points uses the data structure of a kd tree to complete the search; taking each feature point of the target image to be detected as a reference, the nearest and the second-nearest feature points of the template (original) image are found. Wrong matches are eliminated by the ratio test:
d_nearest / d_second-nearest < Threshold
In the above formula, Threshold is the ratio-test threshold used to reject ambiguous matches.
When the condition of the above formula is met, the feature point descriptor pairs that remain constitute the homonymous (corresponding) point set.
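A minimal sketch of this nearest-neighbour matching with a kd-tree and ratio test (using SciPy; the threshold value 0.8 is an assumption, not taken from the patent):

```python
import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(template_desc, query_desc, threshold=0.8):
    """Match query descriptors (image to be detected) against template descriptors.

    Returns index pairs (query_idx, template_idx) that pass the ratio test,
    i.e. the homonymous (corresponding) point set.
    """
    tree = cKDTree(template_desc)              # kd-tree over the template descriptors
    dists, idxs = tree.query(query_desc, k=2)  # nearest and second-nearest Euclidean distances
    matches = []
    for qi, (d, idx) in enumerate(zip(dists, idxs)):
        if d[0] < threshold * d[1]:            # ratio test rejects ambiguous matches
            matches.append((qi, idx[0]))
    return matches
```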
S3.3.3, constructing a matching area according to the same name point set.
S4, according to the output of the airborne camera and the output of the inertia measurement unit, constructing a visual inertia odometer and acquiring the local pose of the unmanned aerial vehicle;
specifically, let T_{W,k} be the transformation from the unmanned aerial vehicle at key frame k to the ENU frame.
In the above formula, T_{W,k} is the local pose of the k-th key frame, C_{W,k} is the attitude of the k-th key frame, and the remaining term is the position in the ENU coordinate system.
C_{W,k} = (φ_{W,k}, θ_{W,k}, ψ_{W,k}), where φ_{W,k} is the roll angle, θ_{W,k} is the pitch angle and ψ_{W,k} is the yaw angle. The initial rotation matrix is obtained by camera calibration and represents the attitude change matrix between camera frames.
The first step in the estimation is to run a visual odometer on the drone images. The inputs are the corrected grayscale images and the (non-static) drone-to-sensor transformation, computed at 10 Hz for each frame. The pose is computed by a compound transformation using the known translations and the three angles, and is then rotated into a standard camera frame. The roll and pitch axes are globally stable in a gravity-aligned inertial frame, while the yaw follows the drone heading.
For each frame of the image, features are extracted and SIFT descriptors are matched between frames to perform landmark triangulation; features that cannot be triangulated by matching are triangulated using the motion between successive frames. The descriptors in the latest image are matched with the last key frame to generate 2D-3D point correspondences. The key frame pose T_{f,k} from the current frame to the last key frame is determined using a maximum likelihood estimation sample consensus (MLESAC) estimator. If the translation or rotation exceeds a threshold, or the number of inliers falls below a minimum, a new key frame is added. For each new key frame, a windowed refinement (bundle adjustment) is performed using simultaneous trajectory estimation and mapping (STEAM).
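As an illustration only (the patent uses an MLESAC estimator; the sketch below substitutes OpenCV's RANSAC-based PnP solver, and the inlier threshold, reprojection error and camera parameters are assumptions):

```python
import cv2
import numpy as np

def estimate_keyframe_pose(points_3d, points_2d, camera_matrix, dist_coeffs):
    """Estimate the current-frame pose from 2D-3D correspondences with the last
    key frame's landmarks, using robust outlier rejection."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs,
        reprojectionError=2.0, confidence=0.99)
    if not ok or inliers is None or len(inliers) < 15:
        return None                   # too few inliers: trigger a new key frame instead
    R, _ = cv2.Rodrigues(rvec)        # rotation matrix from the Rodrigues vector
    return R, tvec, len(inliers)
```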
S5, referring to FIG. 7, performing state joint estimation on the matching area and the local pose of the unmanned aerial vehicle by using the extended Kalman filter algorithm to obtain the real-time pose of the unmanned aerial vehicle; by comparing the two estimates under the consistency principle, the drift of the visual-inertial odometer is suppressed, and the multi-source information fusion improves the accuracy of image matching.
In particular, the uncertain relative transformation T_{k,k-1} from the visual odometer is combined with the uncertain initial pose T_{k,0} from image registration, and state fusion is performed on these uncertain pose measurements. Notably, T_{W,0} is the transformation from the local coordinate system to the global coordinate system, constructed from the RTK pose at the first key frame. The extended Kalman filter algorithm equations are therefore expressed as follows:
In the above formula, the symbols not reproduced here are, in order, the error covariance matrix, the estimate of the initial pose, and the error covariance matrix of the second-order approximation of the unmanned aerial vehicle at key frame k; Q_k is the covariance matrix of the process noise, T_{k,k-1} is the state transition matrix of the unmanned aerial vehicle from key frame k-1 to key frame k, τ_{k,k-1} is the adjoint (companion) matrix of T_{k,k-1}, R_k is the covariance matrix of the measurement noise, K_k is the Kalman gain, and ln(·)∨ and exp(·∧) are the logarithm and exponential operators of SE(3).
Wherein SE(3) is rotation plus displacement (translation), also called the Euclidean transformation or rigid-body transformation; it is generally represented by the matrix T = [R, t; 0, 1], where R is the rotation and t is the displacement, so there are 6 degrees of freedom: 3 for rotation and 3 for position.
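The filter equations themselves are not reproduced in this text. A sketch of a standard SE(3) extended Kalman filter consistent with the symbols listed above, under the assumption that the image-registration result enters as a direct pose measurement (observation Jacobian equal to identity), is:

$$\check{T}_{k,0} = T_{k,k-1}\,\hat{T}_{k-1,0},\qquad \check{P}_k = \boldsymbol{\tau}_{k,k-1}\,\hat{P}_{k-1}\,\boldsymbol{\tau}_{k,k-1}^{\top} + Q_k,$$
$$K_k = \check{P}_k\,(\check{P}_k + R_k)^{-1},\qquad \hat{T}_{k,0} = \exp\!\Big(\big(K_k\,\ln(T^{\mathrm{meas}}_{k,0}\,\check{T}_{k,0}^{-1})^{\vee}\big)^{\wedge}\Big)\,\check{T}_{k,0},\qquad \hat{P}_k = (I - K_k)\,\check{P}_k,$$

where the check and hat accents denote predicted and updated quantities and T^{meas}_{k,0} is the absolute pose obtained from image registration; this form is given for illustration and is not quoted verbatim from the patent.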
Further, as a preferred embodiment of the method, the method also comprises the step of correcting the output of the onboard camera and the output of the inertial measurement unit; an onboard data processing unit processes the data in real time, which reduces the load on data downlink and on the remote control center and improves the timeliness and safety of the scheme.
Specifically, the correction processing of the output of the onboard camera and the output of the inertial measurement unit includes image frame distortion correction, camera coordinate conversion, data time error correction and time alignment, inertial measurement unit error correction, and data reasonableness checking.
1) Image frame distortion correction
Distortion is an offset from an ideal linear projection; put simply, a straight line in the scene is no longer projected as a straight line in the picture. Distortion can be divided into two broad categories: radial distortion and tangential distortion.
Radial distortion takes three forms: barrel distortion, pincushion distortion, and the combination of the two, called moustache distortion.
The correction calculation for radial distortion is as follows:
x_corr = x_dis(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
y_corr = y_dis(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
The correction calculation for tangential distortion is as follows:
x_corr = x_dis + [2 p_1 x_dis y_dis + p_2 (r^2 + 2 x_dis^2)]
y_corr = y_dis + [p_1 (r^2 + 2 y_dis^2) + 2 p_2 x_dis y_dis]
After distortion correction, 5 distortion parameters D = (k_1, k_2, k_3, p_1, p_2) are obtained.
In the above formulas, (x_dis, y_dis) are the actually observed distorted coordinates on the image plane, (x_corr, y_corr) are the distortion-corrected coordinates, r^2 = x_dis^2 + y_dis^2, k_1, k_2 and k_3 are the radial distortion parameters, and p_1 and p_2 are the tangential distortion parameters.
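A minimal sketch of this correction using OpenCV (the camera matrix and distortion vector are placeholders that would come from calibration; note that OpenCV orders the coefficients as (k1, k2, p1, p2, k3), which differs from the D = (k1, k2, k3, p1, p2) ordering above):

```python
import cv2
import numpy as np

# Placeholder intrinsics (fx, fy, cx, cy) and distortion coefficients from calibration.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.12, 0.05, 0.001, -0.002, 0.0])  # (k1, k2, p1, p2, k3), assumed values

frame = cv2.imread("onboard_frame.png")      # hypothetical onboard camera frame
undistorted = cv2.undistort(frame, K, D)     # removes radial and tangential distortion
```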
2) Referring to fig. 4, camera coordinate conversion
In image processing, stereo vision, the world coordinate system, the camera coordinate system, the image coordinate system and the pixel coordinate system are often involved. How a point is converted from the world coordinate system to the pixel coordinate system can be obtained through the conversion of the four coordinate systems.
The specific calculation operation from world coordinates to pixel coordinates is as follows:
In the above formula, the first matrix (not reproduced here) is the camera intrinsic (internal reference) matrix and the second is the camera extrinsic (external reference) matrix; the intrinsic and extrinsic parameters of the camera can be obtained by Zhang Zhengyou's calibration method.
From the above conversion, a coordinate point in three-dimensional space can be mapped to a corresponding pixel in the image; but conversely, finding the three-dimensional point corresponding to a pixel in the image is a problem, because the value of Z_c on the left of the equation is unknown.
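The projection formula itself is omitted above; the standard pinhole-camera relation between world coordinates and pixel coordinates, consistent with the description (given here as an illustrative form), is:

$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix} = \underbrace{\begin{bmatrix}f_x & 0 & c_x\\ 0 & f_y & c_y\\ 0 & 0 & 1\end{bmatrix}}_{\text{intrinsics}}\ \underbrace{\begin{bmatrix}R & t\end{bmatrix}}_{\text{extrinsics}}\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}.$$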
3) Data time error correction and time alignment
The specific calculation formula of the time interpolation alignment is as follows:
In the above formula, T_c(k) and α(k) are the equally spaced data sequences acquired by the device, and t_s(k) is the time series, with an interval of 0.05 s, obtained after time alignment of the devices.
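The interpolation formula itself is not reproduced here; a minimal sketch of aligning a sensor stream to a common 0.05 s (20 Hz) timeline by linear interpolation (the function and variable names are assumptions) might be:

```python
import numpy as np

def align_to_common_timeline(t_sensor, values, t_start, t_end, dt=0.05):
    """Linearly interpolate a sensor sequence onto an equally spaced timeline.

    t_sensor : original sample timestamps (s)
    values   : sensor readings at t_sensor
    returns  : (t_s, aligned values) with t_s spaced dt = 0.05 s apart
    """
    t_s = np.arange(t_start, t_end, dt)         # common 20 Hz timeline
    aligned = np.interp(t_s, t_sensor, values)  # piecewise-linear interpolation
    return t_s, aligned
```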
4) inertial measurement unit error correction
When a multi-axis inertial measurement unit is manufactured, its axes may not be exactly orthogonal; therefore, it needs to be calibrated before use, i.e. the errors of the inertial measurement unit must be corrected.
Specifically, the accelerometer is calibrated by the six-position (six-face) calibration method: each of the three axes of the accelerometer is in turn placed pointing vertically upward and downward and held level for a period of time, and the data of the six positions are collected to complete the calibration. The relationship between the actual acceleration and the acceleration measurement, taking the inter-axis error into account, is:
in the above formula, l is the actual acceleration, a is the measured acceleration value, s is the scale factor, m is the non-perpendicular factor, and b is the measurement deviation, and the variables in the above formula can be calculated by the least square method.
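The relationship itself is not reproduced above; one common parameterization consistent with the listed variables (an assumption, not quoted from the patent) is:

$$l = S\,M\,(a - b),\qquad S = \mathrm{diag}(s_x,\,s_y,\,s_z),\qquad M = \begin{bmatrix}1 & m_{xy} & m_{xz}\\ 0 & 1 & m_{yz}\\ 0 & 0 & 1\end{bmatrix},$$

where S collects the per-axis scale factors s, M the non-perpendicular (inter-axis) factors m, and b the measurement bias; with the six static orientations the unknowns are estimated by least squares against the known gravity vector.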
The calibration principle of the gyroscope is similar, but unlike the accelerometer its reference values are provided by a high-precision turntable, and the six "faces" correspond to clockwise and counterclockwise rotation about each of the axes. The inertial measurement unit only needs to be placed horizontally, but a large amount of data usually has to be measured to reduce the influence of uncertain errors, and the Levenberg-Marquardt (LM) algorithm is used to obtain the optimal solution.
5) Data reasonability check
The data measured by the accelerometer, gyroscope and magnetic compass of the inertial measurement unit, and the real-time frames of the monocular camera, must be checked for reasonableness. The reasonableness check mainly serves to reject outliers (wild values) in the measurements and to prevent measurements containing gross errors from passing through the filter and degrading the tracking accuracy. For the real-time image frames only the matching result can be processed: based on the consistency principle, the estimate of the inertial measurement unit is compared in real time with the matching positioning result of the previous moment, and matching results with large positioning deviation are rejected. A detection algorithm oriented to the tested element, based on five-point linear prediction, is adopted.
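As an illustration of such an outlier check based on five-point linear prediction (the window handling and threshold are assumptions, not specified in the patent):

```python
import numpy as np

def is_outlier(history, new_value, threshold):
    """Reject a new measurement if it deviates too far from a linear prediction
    made from the last five samples (five-point linear prediction)."""
    if len(history) < 5:
        return False                                       # not enough history yet
    t = np.arange(5)
    slope, intercept = np.polyfit(t, history[-5:], 1)      # fit a line to the last 5 samples
    predicted = slope * 5 + intercept                      # extrapolate one step ahead
    return abs(new_value - predicted) > threshold
```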
As a further preferred embodiment of the method, in order to implement the above technical solution, with reference to fig. 8, the present invention provides a monocular unmanned aerial vehicle absolute visual matching positioning system based on a geographic base map, including:
the template image construction module is used for constructing a geographical base map of the flight area of the unmanned aerial vehicle and extracting features to obtain a template image;
the target image to be detected construction module is used for acquiring the picture of the airborne camera and extracting the characteristics to obtain a target image to be detected;
the matching area construction module is used for carrying out motion constraint on the unmanned aerial vehicle based on the inertial measurement unit and matching the template image with the target image to be detected to obtain a matching area;
the local pose estimation module is used for constructing a visual inertial odometer and acquiring the local pose of the unmanned aerial vehicle according to the output of the airborne camera and the output of the inertial measurement unit;
and the real-time pose estimation module is used for performing state joint estimation on the matching area and the local pose of the unmanned aerial vehicle by using an extended Kalman filtering algorithm to obtain the real-time pose of the unmanned aerial vehicle.
The contents in the above method embodiments are all applicable to the present system embodiment, the functions specifically implemented by the present system embodiment are the same as those in the above method embodiment, and the beneficial effects achieved by the present system embodiment are also the same as those achieved by the above method embodiment.
The beneficial effects of the invention specifically comprise:
1) The absolute visual positioning technology for unmanned aerial vehicles based on the geographic base map overcomes the error accumulation and temporal drift problems of relative visual positioning, without sacrificing the positioning precision and efficiency of a relative positioning scheme.
2) The visual sensor used by this technology is a passive perception sensor: it senses the surrounding environment through images by means of external illumination, and has the advantages of small size, low price, rich information content, strong positioning autonomy and reliability, and high positioning precision. It is therefore autonomous navigation equipment very well suited to unmanned aerial vehicles, and vision-based positioning can serve as an effective supplement to GNSS navigation in most scenes with adequate lighting.
3) When a precise absolute position cannot be acquired because the GPS signal is noisy or denied, or when the positioning has a large deviation in a complex battlefield environment, this technology can replace the GPS scheme to acquire short-period absolute positioning information for the unmanned aerial vehicle.
4) The technology offers great advantages for search and rescue at low cost: it can form task clusters suited to search-and-rescue tasks and environments, provides high flexibility, has no centralized mode, and greatly reduces the difficulty of maintenance and upgrading.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (7)
1. The utility model provides a monocular unmanned aerial vehicle absolute vision matching positioning scheme based on geographical base map which characterized in that includes following step:
constructing a geographical base map of an unmanned aerial vehicle flight area and extracting characteristics to obtain a template image;
acquiring a picture of an onboard camera and extracting characteristics to obtain an image of a target to be detected;
carrying out motion constraint on the unmanned aerial vehicle based on an inertial measurement unit, and matching the template image with a target image to be detected to obtain a matching area;
according to the output of the airborne camera and the output of the inertia measurement unit, a visual inertia odometer is constructed and the local pose of the unmanned aerial vehicle is obtained;
and performing state joint estimation on the matching area and the local pose of the unmanned aerial vehicle by using an extended Kalman filtering algorithm to obtain the real-time pose of the unmanned aerial vehicle.
2. The absolute visual matching positioning scheme for the monocular unmanned aerial vehicle based on the geographic base map as claimed in claim 1, wherein the step of constructing the geographic base map of the unmanned aerial vehicle flight area and performing feature extraction to obtain the template image specifically comprises:
selecting a downloading source according to the longitude and latitude coordinates, the image zooming level and the image style of the unmanned aerial vehicle flight area to obtain an image tile;
unifying and fusing image tile coordinates to obtain a coordinate scheme;
writing the coordinate scheme into world coordinates of four corner points of the template to be processed to obtain a geographic base map;
and extracting the feature points of the geographic base map, determining the directions of the feature points, and constructing feature point description to obtain a template image.
3. The absolute visual matching positioning scheme for the monocular unmanned aerial vehicle based on the geographical base map as claimed in claim 1, wherein the step of performing motion constraint on the unmanned aerial vehicle based on the inertial measurement unit, matching the template image with the target image to be detected to obtain the matching area specifically comprises:
estimating the motion state of the unmanned aerial vehicle by using an error state Kalman filtering algorithm based on an inertial measurement unit;
establishing a constrained target image to be detected according to the motion state of the unmanned aerial vehicle;
and matching the constrained target image to be detected with the template image to obtain a matching area.
4. The absolute visual matching positioning scheme for the monocular unmanned aerial vehicle based on the geographical base map as claimed in claim 3, wherein the step of estimating the motion state of the unmanned aerial vehicle by using the error state kalman filter algorithm based on the inertial measurement unit specifically comprises:
performing integral updating on the nominal state of the unmanned aerial vehicle by using an inertia measurement unit;
updating time and measurement of the error state of the unmanned aerial vehicle by using an error state Kalman filtering algorithm;
and combining the nominal state and the error state to obtain the motion state of the unmanned aerial vehicle.
5. The absolute visual matching positioning scheme for the monocular unmanned aerial vehicle based on the geographical base map as claimed in claim 3, wherein the step of matching the constrained target image to be detected with the template image to obtain the matching area specifically comprises:
calculating the Euclidean distance between any two points of the constrained target image to be detected and the template image by adopting the Euclidean distance;
eliminating error matching points according to the Euclidean distance and the data structure of the kd tree to obtain a homonymy point set;
and constructing a matching area according to the same-name point set.
6. The absolute visual matching positioning scheme for the monocular unmanned aerial vehicle based on the geographical base map as recited in claim 1, wherein the extended Kalman filter algorithm equation is expressed as follows:
In the above formula, the symbols not reproduced here are, in order, the error covariance matrix, the estimate of the initial pose, and the error covariance matrix of the second-order approximation of the unmanned aerial vehicle at key frame k; Q_k is the covariance matrix of the process noise, T_{k,k-1} is the state transition matrix of the unmanned aerial vehicle from key frame k-1 to key frame k, τ_{k,k-1} is the adjoint (companion) matrix of T_{k,k-1}, R_k is the covariance matrix of the measurement noise, K_k is the Kalman gain, and ln(·)∨ and exp(·∧) are the logarithm and exponential operators of SE(3).
7. The absolute visual matching positioning scheme of the monocular unmanned aerial vehicle based on the geographic base map is characterized by further comprising a step of correcting the output of an airborne camera and the output of an inertial measurement unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210418712.XA CN114842224A (en) | 2022-04-20 | 2022-04-20 | Monocular unmanned aerial vehicle absolute vision matching positioning scheme based on geographical base map |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210418712.XA CN114842224A (en) | 2022-04-20 | 2022-04-20 | Monocular unmanned aerial vehicle absolute vision matching positioning scheme based on geographical base map |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114842224A true CN114842224A (en) | 2022-08-02 |
Family
ID=82566041
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210418712.XA Pending CN114842224A (en) | 2022-04-20 | 2022-04-20 | Monocular unmanned aerial vehicle absolute vision matching positioning scheme based on geographical base map |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114842224A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116203554A (en) * | 2023-05-06 | 2023-06-02 | 武汉煜炜光学科技有限公司 | Environment point cloud data scanning method and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111504323A (en) * | 2020-04-23 | 2020-08-07 | 湖南云顶智能科技有限公司 | Unmanned aerial vehicle autonomous positioning method based on heterogeneous image matching and inertial navigation fusion |
CN112347840A (en) * | 2020-08-25 | 2021-02-09 | 天津大学 | Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method |
CN112577493A (en) * | 2021-03-01 | 2021-03-30 | 中国人民解放军国防科技大学 | Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance |
CN112985416A (en) * | 2021-04-19 | 2021-06-18 | 湖南大学 | Robust positioning and mapping method and system based on laser and visual information fusion |
CN114066972A (en) * | 2021-10-25 | 2022-02-18 | 中国电子科技集团公司第五十四研究所 | Unmanned aerial vehicle autonomous positioning method based on monocular vision |
- 2022-04-20: application CN202210418712.XA filed (CN); publication CN114842224A (en); status: active, Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111504323A (en) * | 2020-04-23 | 2020-08-07 | 湖南云顶智能科技有限公司 | Unmanned aerial vehicle autonomous positioning method based on heterogeneous image matching and inertial navigation fusion |
CN112347840A (en) * | 2020-08-25 | 2021-02-09 | 天津大学 | Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method |
CN112577493A (en) * | 2021-03-01 | 2021-03-30 | 中国人民解放军国防科技大学 | Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance |
CN112985416A (en) * | 2021-04-19 | 2021-06-18 | 湖南大学 | Robust positioning and mapping method and system based on laser and visual information fusion |
CN114066972A (en) * | 2021-10-25 | 2022-02-18 | 中国电子科技集团公司第五十四研究所 | Unmanned aerial vehicle autonomous positioning method based on monocular vision |
Non-Patent Citations (1)
Title |
---|
蓝朝桢 (Lan Chaozhen), 阎晓东 (Yan Xiaodong), 崔志祥 (Cui Zhixiang), 秦剑琪 (Qin Jianqi), 姜怀刚 (Jiang Huaigang): "Real-time feature matching method for autonomous absolute positioning of unmanned aerial vehicles" (用于无人机自主绝对定位的实时特征匹配方法), Journal of Geomatics Science and Technology (测绘科学技术学报), no. 03, 15 June 2020 (2020-06-15) *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116203554A (en) * | 2023-05-06 | 2023-06-02 | 武汉煜炜光学科技有限公司 | Environment point cloud data scanning method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111561923B (en) | SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion | |
CN109297510B (en) | Relative pose calibration method, device, equipment and medium | |
CN110243358B (en) | Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system | |
CN106780699B (en) | Visual SLAM method based on SINS/GPS and odometer assistance | |
CN101598556B (en) | Unmanned aerial vehicle vision/inertia integrated navigation method in unknown environment | |
EP2503510B1 (en) | Wide baseline feature matching using collaborative navigation and digital terrain elevation data constraints | |
Panahandeh et al. | Vision-aided inertial navigation based on ground plane feature detection | |
CN109708649B (en) | Attitude determination method and system for remote sensing satellite | |
CN107909614B (en) | Positioning method of inspection robot in GPS failure environment | |
CN109903330B (en) | Method and device for processing data | |
CN109596121B (en) | Automatic target detection and space positioning method for mobile station | |
CN115574816B (en) | Bionic vision multi-source information intelligent perception unmanned platform | |
Samadzadegan et al. | Autonomous navigation of Unmanned Aerial Vehicles based on multi-sensor data fusion | |
CN109871739B (en) | Automatic target detection and space positioning method for mobile station based on YOLO-SIOCTL | |
CN108613675B (en) | Low-cost unmanned aerial vehicle movement measurement method and system | |
Kinnari et al. | GNSS-denied geolocalization of UAVs by visual matching of onboard camera images with orthophotos | |
CN116184430B (en) | Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit | |
CN115560760A (en) | Unmanned aerial vehicle-oriented vision/laser ranging high-altitude navigation method | |
Kaufmann et al. | Shadow-based matching for precise and robust absolute self-localization during lunar landings | |
Xian et al. | Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach | |
CN115711616B (en) | Smooth positioning method and device for indoor and outdoor traversing unmanned aerial vehicle | |
Mansur et al. | Real time monocular visual odometry using optical flow: study on navigation of quadrotors UAV | |
CN114842224A (en) | Monocular unmanned aerial vehicle absolute vision matching positioning scheme based on geographical base map | |
CN117330052A (en) | Positioning and mapping method and system based on infrared vision, millimeter wave radar and IMU fusion | |
CN115930948A (en) | Orchard robot fusion positioning method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | 
SE01 | Entry into force of request for substantive examination | | 