CN110631588B - Unmanned aerial vehicle visual navigation positioning method based on RBF network - Google Patents
Unmanned aerial vehicle visual navigation positioning method based on RBF network
- Publication number
- CN110631588B CN110631588B CN201910924244.1A CN201910924244A CN110631588B CN 110631588 B CN110631588 B CN 110631588B CN 201910924244 A CN201910924244 A CN 201910924244A CN 110631588 B CN110631588 B CN 110631588B
- Authority
- CN
- China
- Prior art keywords
- image
- feature point
- aerial vehicle
- unmanned aerial
- descriptor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 53
- 230000000007 visual effect Effects 0.000 title claims abstract description 36
- 238000012549 training Methods 0.000 claims abstract description 23
- 238000012545 processing Methods 0.000 claims abstract description 13
- 230000006870 function Effects 0.000 claims description 31
- 230000008569 process Effects 0.000 claims description 16
- 239000013598 vector Substances 0.000 claims description 16
- 238000013528 artificial neural network Methods 0.000 claims description 15
- 238000001514 detection method Methods 0.000 claims description 12
- 238000003064 k means clustering Methods 0.000 claims description 3
- 238000012546 transfer Methods 0.000 claims description 2
- 210000002569 neuron Anatomy 0.000 description 8
- 238000000605 extraction Methods 0.000 description 7
- 230000004044 response Effects 0.000 description 6
- 238000005259 measurement Methods 0.000 description 5
- 230000004913 activation Effects 0.000 description 3
- 238000004891 communication Methods 0.000 description 3
- 238000010586 diagram Methods 0.000 description 3
- 238000013507 mapping Methods 0.000 description 3
- 239000011159 matrix material Substances 0.000 description 3
- 230000009466 transformation Effects 0.000 description 3
- 238000013480 data collection Methods 0.000 description 2
- 230000007613 environmental effect Effects 0.000 description 2
- 230000004927 fusion Effects 0.000 description 2
- 230000004807 localization Effects 0.000 description 2
- 230000007774 longterm Effects 0.000 description 2
- 238000000691 measurement method Methods 0.000 description 2
- 238000005316 response function Methods 0.000 description 2
- 238000005070 sampling Methods 0.000 description 2
- 238000013459 approach Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000015556 catabolic process Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000006731 degradation reaction Methods 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 230000005764 inhibitory process Effects 0.000 description 1
- 238000012804 iterative process Methods 0.000 description 1
- 238000012886 linear function Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an unmanned aerial vehicle visual navigation positioning method based on an RBF network. The scheme of the invention is as follows: while the GNSS signal is available, images are acquired through a camera, image frames are extracted, feature points are detected in each image, descriptors are extracted, and the feature point information of each image is retained; this descriptor extraction is repeated for successive image frames, and the descriptor information together with the positioning information is stored in a visual database. When the GNSS signal is lost, images shot by the camera are extracted and descriptors are likewise extracted, and an RBF network classifier is trained with the visual database information; a neighborhood search is then performed on the generated descriptors with the RBF network classifier, the optimal matching position is estimated, and the current positioning information is obtained from the positioning information recorded at the optimal matching position. With the invention, positioning and navigation of the unmanned aerial vehicle can still be performed after GNSS signals are lost, based on the visual database constructed while GNSS was available; and since the visual database stores only the feature point descriptor information of the images, it occupies little memory.
Description
Technical Field
The invention belongs to the technical field of unmanned aerial vehicle navigation positioning, and particularly relates to an unmanned aerial vehicle visual navigation positioning method based on a Radial Basis Function (RBF) network.
Background
A comprehensive positioning system plays a crucial role in the stability and integrity of an unmanned aerial vehicle. The most common positioning solution combines the Global Navigation Satellite System (GNSS) and the Inertial Navigation System (INS) within a multi-sensor fusion framework. In this arrangement, GNSS serves as a compact and economical way to constrain the unbounded errors generated by the INS sensors during positioning. The INS iteratively integrates data continuously acquired from several sensors over time to obtain an approximate drone position, and in this process the measurement errors of the sensors accumulate rapidly and grow without bound. Most drones therefore use an Extended Kalman Filter (EKF) framework to fuse data from the INS and GNSS, combining the short-term accuracy of the inertial navigation system with the long-term accuracy of the global navigation satellite system and thereby effectively suppressing the positioning error. As a result, the global navigation satellite system is widely used on all kinds of drones.
Despite its advantages, the global navigation satellite system has proven unreliable in many documented instances. In outdoor scenes such as urban canyons, forests, jungles, and rainy regions it is vulnerable both to intentional attacks and to unintentional environmental disturbances. In addition, drones using global navigation satellite systems have proven susceptible to signal spoofing on a number of occasions, and such attacks are now becoming a reality. A further disadvantage of using global navigation satellite systems for drone navigation is the radio communication required to acquire positioning data; radio communication systems are generally prone to availability problems, interference and signal variation. The root cause is that GNSS/INS fusion relies on global information obtained from GNSS to solve a local positioning problem. To address these problems, a suitable navigation sensor and a new navigation algorithm are needed to solve the navigation and positioning problem of the unmanned aerial vehicle when the wireless communication is interfered with and the GNSS/INS suffers short-term or long-term failure.
One popular method of reliably determining drone position in GNSS-denied or GNSS-degraded outdoor environments is to use a monocular 2D camera combined with vision-based techniques. These techniques fall into two categories: those that use a priori knowledge of the environment and those that do not. In the field of visual navigation with a priori knowledge, map-based navigation techniques are the most advanced; they match images taken by the drone against previously acquired high-resolution landmark satellite images or landmark images. The limitations of this solution include the need for a large database of geographic images and database access by network-connected onboard devices; another important limitation is the need to know the starting point or predefined boundaries in advance. Map-based solutions therefore have serious limitations that prevent their application in real scenes. The second category of vision-based techniques does not have this limitation because no prior knowledge of the environment is required. This class of solutions includes visual odometry and simultaneous localization and mapping (SLAM), among others. In visual odometry, the motion of the drone is estimated by tracking features or pixels between successive images obtained from a monocular camera. However, even the most advanced monocular visual odometry degrades over time, because each location estimate is based on the previous one, so errors accumulate. Compared with visual odometry, SLAM solves the localization problem while building an environmental map. Map building requires multiple steps such as tracking, relocalization and loop closure, and this solution is always accompanied by heavy computation and memory usage.
Disclosure of Invention
The invention aims to: address the above problems by providing an RBF-network-based visual navigation positioning method for an unmanned aerial vehicle. Ground-image feature descriptors are acquired during navigation of the unmanned aerial vehicle, and an RBF network classifier trained on the feature descriptor data set performs a neighborhood search on the feature point descriptors of the currently acquired image to obtain the optimal matching position of the current image, from which more accurate positioning information for the location of the unmanned aerial vehicle is estimated.
The invention discloses an unmanned aerial vehicle visual navigation positioning method based on an RBF network, which comprises the following steps:
step S1: setting an RBF neural network for matching the feature point descriptors of the image, and training the neural network;
wherein the training samples are images collected by the onboard camera during navigation of the unmanned aerial vehicle, and the feature vectors of the training samples are the feature point descriptors of the images obtained through ORB feature point detection processing;
step S2: constructing a visual database of the unmanned aerial vehicle during navigation:
in the navigation process of the unmanned aerial vehicle, images are collected through an airborne camera, ORB feature point detection processing is carried out on the collected images, descriptors of all feature points are extracted, and feature point descriptors of the current images are obtained; storing the feature point descriptors of the image and positioning information during image acquisition into a visual database;
and step S3: unmanned aerial vehicle vision navigation positioning based on visual database:
based on a fixed interval period, extracting an image acquired by an airborne camera to serve as an image to be matched;
carrying out ORB feature point detection processing on the image to be matched, and extracting a descriptor of each feature point to obtain a feature point descriptor of the image to be matched;
inputting the feature point descriptor of the image to be matched into a trained RBF neural network, and performing neighborhood search to obtain the optimal matching feature point descriptor of the image to be matched in a visual database;
and obtaining the current visual navigation positioning result of the unmanned aerial vehicle based on the positioning information recorded in the database by the optimal matching feature point descriptor.
Further, step S3 further includes: detecting whether the similarity between the optimal matching feature point descriptor and the feature point descriptor of the image to be matched is smaller than a preset similarity threshold; if so, the current visual navigation positioning result of the unmanned aerial vehicle is obtained based on the positioning information recorded in the database for the optimal matching feature point descriptor; otherwise, navigation continues based on the most recently obtained visual navigation positioning result.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
(1) The visual database only stores the feature point descriptor information of the image, so that the occupied space of a memory is reduced;
(2) Under the condition of no reference image library, the visual database can be directly accessed to match images shot by the unmanned aerial vehicle;
(3) Feature descriptor neighborhood search is realized based on the trained RBF network, the best matching position is obtained, and the positioning information is estimated.
Drawings
FIG. 1 is a visual positioning overall system framework;
FIG. 2 is a flow chart of ORB feature point detection;
FIG. 3 is a schematic diagram of rough feature point extraction during ORB feature point extraction;
FIG. 4 is a schematic diagram of an RBF network architecture;
FIG. 5 is a schematic diagram of the matching and positioning process of the RBF network.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings.
In the vision-based unmanned aerial vehicle navigation positioning method of the invention, the ground-image feature descriptors of the region where the unmanned aerial vehicle is currently located are acquired, and an RBF network classifier trained on the feature descriptor data set performs a neighborhood search on the feature point descriptors of the acquired image to obtain the optimal matching position of the image, from which more accurate positioning information for the location of the unmanned aerial vehicle is estimated.
Referring to fig. 1, the vision-based navigation and positioning method for the unmanned aerial vehicle mainly comprises two parts: first, data acquisition during the outbound flight; second, positioning estimation during the return flight.
In the data acquisition part, images are acquired through the camera, image frames are extracted from the camera, feature points are detected in each image and a descriptor is extracted for each; the image data itself is discarded in this process and only the feature point information of the image is kept. This descriptor extraction is repeated for subsequent image frames, and the descriptor information and positioning information are stored in a visual database.
In the positioning estimation stage, when GNSS signals are lost, images shot by the camera are extracted and descriptor extraction is performed in the same way, and an RBF network classifier is trained using the visual database information; a neighborhood search is then performed on the generated descriptors with the RBF network classifier and the optimal matching position is estimated; finally, the positioning information of the current image is estimated from the positioning information stored in the visual database at the optimal matching position.
The method comprises the following concrete implementation steps:
(1) Data collection.
Images are acquired from the onboard camera, ORB (Oriented FAST and Rotated BRIEF) feature point detection is performed on each image frame, descriptors of each feature point are extracted, and a database entry is then created and stored, the entry consisting of the extracted set of feature point descriptors and the corresponding positioning information. The positioning information consists of attitude information and position information provided by the onboard application program of the unmanned aerial vehicle; the format and nature of this information depend strongly on the specific application program adopted.
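As an illustration of this step, the following sketch uses the OpenCV ORB implementation to obtain the 256-bit binary descriptors and pairs them with the positioning information in one database entry; the entry class and the pose format are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass
import cv2
import numpy as np

@dataclass
class VisualDatabaseEntry:
    descriptors: np.ndarray   # N x 32 uint8 array, i.e. N 256-bit ORB descriptors
    pose: dict                # positioning info from the onboard application (attitude + position)

def build_entry(frame_bgr, pose, n_features=500, scale=1.2, n_levels=8):
    """Extract ORB descriptors from one camera frame and pair them with the positioning info."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=n_features, scaleFactor=scale, nlevels=n_levels)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    # Only the descriptors and positioning info are stored; the image itself is discarded.
    return VisualDatabaseEntry(descriptors=descriptors, pose=pose)
```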
(2) Feature extraction.
ORB feature point detection uses the FAST (Features from Accelerated Segment Test) algorithm to detect feature points on each level of the scale pyramid. Based on the image gray values around a candidate feature point, a circle of pixels around the candidate is examined; if the gray-value difference between enough pixels in the surrounding region and the candidate point is large enough (i.e., greater than a preset threshold), the candidate point is regarded as a feature point.
The method comprises the following specific steps:
1) ORB feature point detection.
Referring to fig. 2, when detecting ORB feature points, FAST corner detection is first applied to the input image; Harris corner response values are then calculated for the selected FAST feature points using the Harris corner measure; the N feature points with the largest response values are then picked out according to the sorted corner response values; next, the direction of each ORB feature point is calculated using the gray centroid method, and BRIEF is adopted as the feature point description method; finally, a 256-bit binary descriptor is generated for each feature point from the sampled point pairs.
That is, FAST feature points are detected with the FAST feature point detection method for the ORB features, Harris corner response values are calculated for the selected FAST feature points with the Harris corner measure, and the first N feature points with the largest response values are selected.
The corner response function f_CRF of a FAST feature point is defined as:
f_CRF(x) = 1 if |I(x) − I(p)| > ε_d, and f_CRF(x) = 0 otherwise,
where ε_d is a threshold, I(x) is the pixel value of a pixel in the circular neighborhood of the point to be tested, and I(p) is the pixel value of the point to be tested itself.
The sum of the corner response function values over all the surrounding points of the point to be tested is denoted N; when N is greater than a set threshold, the point to be tested is a FAST feature point. The threshold is usually 12.
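The segment test above can be sketched as follows; the 16-position Bresenham circle of radius 3 and the threshold values are taken from the description, while the function name and array layout are illustrative assumptions.

```python
import numpy as np

# Offsets of the 16 pixels on the Bresenham circle of radius 3 around the candidate point.
CIRCLE_16 = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
             (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, px, py, eps_d=20, n_threshold=12):
    """Apply the corner response test to the candidate point p = (px, py)."""
    center = int(img[py, px])
    # f_CRF is 1 for a circle pixel whose intensity differs from p by more than eps_d.
    n = sum(1 for dx, dy in CIRCLE_16
            if abs(int(img[py + dy, px + dx]) - center) > eps_d)
    return n > n_threshold  # p is a FAST feature point when N exceeds the threshold (usually 12)
```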
The specific processing flow for ORB feature point extraction is as follows:
the first step is as follows: and (5) roughly extracting the feature points. Selecting a point in the image as p, taking p as the center of circle and 3 pixels as radius, detecting the pixel values of the corresponding points with the position numbers of 1, 5, 9 and 13 on the circumference (as shown in fig. 3, one of the points includes 16 positions, when rough extraction is performed, four points on the circumference in the four directions of the upper, lower, left and right of the center of circle p are detected), and if the pixel value of at least 3 points in the 4 points is greater than or less than the pixel value of the p point, then the p point is considered as a feature point.
The second step: removal of locally dense points. A non-maximum suppression algorithm is applied, the feature point with the maximum response at each location is retained, and the remaining feature points are deleted.
The third step: scale invariance of the feature points. A pyramid is established to give the feature points multi-scale invariance. A scale factor scale (e.g. 1.2) and the number of pyramid levels nlevels (e.g. 8) are set. The original image is down-sampled into nlevels images according to the scale factor; the relation between each down-sampled image I' and the original image I is:
I' = I / scale^k (k = 1, 2, …, 8)
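A minimal sketch of this down-sampling relation, assuming OpenCV resizing and treating level k = 0 as the original image:

```python
import cv2

def build_pyramid(img, scale=1.2, n_levels=8):
    """Down-sample the image by scale**k for each pyramid level k (level 0 = original)."""
    return [cv2.resize(img, None, fx=1.0 / scale ** k, fy=1.0 / scale ** k,
                       interpolation=cv2.INTER_LINEAR) for k in range(n_levels)]
```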
the fourth step: rotational invariance of feature points. And calculating the direction of the characteristic point by adopting a gray scale centroid method, wherein the moment in the radius r range of the characteristic point is the centroid, and the vector formed between the characteristic point and the centroid is the direction of the characteristic point.
The vector angle theta of the feature point and the centroid C is the main direction of the feature point:
θ=arctan(C x ,C y )
wherein (C) x ,C y ) Representing the coordinates of the centroid C.
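The gray-centroid orientation can be sketched as below, assuming a square patch of radius r around the feature point and a NumPy grayscale image; the moments m00, m10, m01 define the centroid C, and θ = arctan(C_y / C_x).

```python
import numpy as np

def feature_orientation(img, px, py, r=15):
    """Gray-centroid direction of the feature point at (px, py)."""
    patch = img[py - r:py + r + 1, px - r:px + r + 1].astype(np.float64)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    m00 = patch.sum()                       # zeroth-order moment
    m10 = (xs * patch).sum()                # first-order moment in x
    m01 = (ys * patch).sum()                # first-order moment in y
    cx, cy = m10 / m00, m01 / m00           # centroid C = (C_x, C_y) relative to the feature point
    return np.arctan2(cy, cx)               # theta = arctan(C_y / C_x)
```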
2) Feature point descriptor generation.
The ORB feature uses the BRIEF descriptor as the feature point description method. A BRIEF descriptor is a binary string of length n; in this embodiment n is 256. The binary value τ(p; x, y) of one bit of the descriptor is calculated as:
τ(p; x, y) = 1 if p(x) < p(y), and τ(p; x, y) = 0 otherwise,
where p(x) and p(y) are the gray levels at the two points of a point pair. The feature descriptor f_n(p) formed from the n point pairs can be expressed as:
f_n(p) = ∑_{1≤i≤n} 2^(i−1) τ(p; x_i, y_i)
To make the descriptor rotation invariant, an affine transformation matrix R_θ is constructed, yielding a rotation-corrected version S_θ of the generator matrix S:
S_θ = R_θ S
where the generator matrix S consists of the point pairs (x_i, y_i), i = 1, 2, …, 2n, and θ is the principal direction of the feature point.
The finally obtained feature point descriptor is g_n(p, θ) = f_n(p) | (x_i, y_i) ∈ S_θ, which forms the 256-bit descriptor of the feature point.
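A sketch of this steered BRIEF computation follows; the random sampling pattern stands in for ORB's learned point pairs, the feature point is assumed to lie far enough from the image border, and all names are illustrative.

```python
import numpy as np

def steered_brief(img, px, py, theta, n_bits=256, patch_radius=15, seed=0):
    """256-bit steered BRIEF descriptor for the feature point at (px, py)."""
    rng = np.random.default_rng(seed)
    # Generator matrix S: n point pairs, each pair being two offsets inside the patch.
    pairs = rng.integers(-patch_radius, patch_radius + 1, size=(n_bits, 2, 2))
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])                 # rotation matrix R_theta
    bits = np.zeros(n_bits, dtype=np.uint8)
    for i, (a, b) in enumerate(pairs):
        ax, ay = (rot @ a).round().astype(int)        # rotated offsets, i.e. S_theta = R_theta * S
        bx, by = (rot @ b).round().astype(int)
        # tau(p; x, y) = 1 when the intensity at the first point is below that at the second
        bits[i] = img[py + ay, px + ax] < img[py + by, px + bx]
    return bits
```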
(3) Matching and positioning based on the RBF neural network.
When the GNSS/INS signal of the unmanned aerial vehicle becomes unavailable, the system instructs the unmanned aerial vehicle to return home. Using the unmanned aerial vehicle motion information stored in the feature database, the image descriptors extracted during the return flight are matched against the descriptors previously inserted into the database to obtain the positioning information. The matching and positioning system based on the RBF neural network consists of network training and matching/positioning. The specifics are as follows:
1) A training mode is set.
And setting a training mode, learning the training samples and providing a classification decision.
The RBF network contains only one hidden layer; the distance between the input and a center vector is taken as the argument of the function, and a radial basis function is used as the activation function. This local approximation reduces the computational complexity, because for a given input X only some of the neurons respond while the others output approximately 0, so only the weights w of the responding neurons need to be adjusted.
Referring to FIG. 4, the RBF neural network is composed of an input layer, a hidden layer and an output layer, wherein
An input layer, the transformation from the input space to the hidden layer space being nonlinear;
a hidden layer, neurons using radial basis functions as activation functions, the hidden layer to output layer spatial transformation being linear;
the output layer uses linear neurons, whose output is a linear combination of the hidden-layer neuron outputs;
the RBF network adopts RBF as the 'base' of the hidden unit to form a hidden layer space, and an input vector is directly mapped to the hidden space. After the center point is determined, the mapping relationship can be determined. The mapping from input to output of the network is nonlinear, the network output is linear for adjustable parameters, and the connection weight can be directly solved by a linear equation set, so that the learning speed is greatly increased, and the local minimum problem is avoided.
In this specific embodiment, the weights from the input layer of the RBF neural network to the hidden layer are fixed at 1, and the transfer function of the hidden-layer units is a radial basis function. Each hidden-layer neuron takes the distance between its weight (center) vector w_i and the input vector X, multiplied by its bias b_i, as the input to the neuron activation function. Taking the radial basis function as a Gaussian function, the output of neuron i is:
φ_i(x) = exp(−‖x − x_i‖² / (2σ²))
where x is the input data, i.e. the input vector, x_i is the center of the basis function, and σ is the width parameter of the function, which determines how each radial-basis-layer neuron responds to the input vector.
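As a minimal sketch, the hidden-layer response to one input vector can be computed as below; the centers and width are assumed to be given.

```python
import numpy as np

def rbf_hidden_layer(x, centers, sigma):
    """Gaussian response of every hidden neuron to the input vector x."""
    # centers: (I, D) array of basis-function centers x_i; x: (D,) input vector
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))
```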
2) And (4) RBF neural network learning.
The RBF network has three kinds of parameters to learn: the basis function centers x_i, the variance σ, and the weights w between the hidden layer and the output layer.
i. Determining the basis function centers x_i.
The feature descriptor vectors of the images acquired by the camera are used to build the feature database, and a k-means clustering algorithm is used to determine the kernel function centers x_i. I different samples are randomly selected from the training samples as the initial centers x_i(0). A training sample X_k is then drawn at random, and the center it is closest to is determined by finding the index satisfying:
i(X_k) = argmin_i ‖X_k − x_i(n)‖
where i = 1, 2, …, I, x_i(n) denotes the i-th center of the radial basis function at the n-th iteration, and the iteration counter is initialized to n = 0. The winning basis function center is then adjusted by
x_i(n+1) = x_i(n) + γ[X_k(n) − x_i(n)]
where γ is the learning step size, 0 < γ < 1.
That is, the basis function centers are updated iteratively with this formula. When the change between the results of the two most recent iterations does not exceed a preset threshold, the updating is stopped (learning is finished), x_i(n+1) ≈ x_i(n) is assumed, and the basis function centers from the last update are taken as the final training output x_i (i = 1, 2, …, I). Otherwise, n = n + 1 and the process repeats.
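A sketch of this self-organizing center selection; the step size γ, the stopping threshold and the function name are assumed parameters of the sketch.

```python
import numpy as np

def learn_centers(samples, n_centers, gamma=0.1, tol=1e-4, max_iter=10000, seed=0):
    """Self-organizing selection of the basis-function centers from the descriptor samples."""
    rng = np.random.default_rng(seed)
    # Initial centers x_i(0): I distinct samples chosen at random.
    centers = samples[rng.choice(len(samples), n_centers, replace=False)].astype(float)
    for _ in range(max_iter):
        xk = samples[rng.integers(len(samples))].astype(float)   # random training sample X_k
        i = np.argmin(np.linalg.norm(centers - xk, axis=1))      # winning center i(X_k)
        step = gamma * (xk - centers[i])                         # x_i(n+1) = x_i(n) + gamma [X_k - x_i(n)]
        centers[i] += step
        if np.linalg.norm(step) < tol:                           # stop once the update is negligible
            break
    return centers
```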
ii. Determining the variance σ of the basis functions.
After the centers of the RBF neural network are determined, the width is expressed as:
σ = d_max / √(2M)
where M is the number of hidden-layer units and d_max is the maximum distance between the selected centers.
iii. Determining the hidden-layer-to-output-layer weights w.
The connection weights from the hidden layer to the output-layer units are computed with the least-squares method, i.e. the weight vector is obtained as the least-squares (pseudo-inverse) solution of the linear system G·w = d, where d is the vector of desired outputs and the elements of G are
g_qi = exp(−(M / d_max²) ‖X_q − x_i‖²)
in which g_qi represents the response of the i-th basis function (with its center) to the q-th input sample, X_q is the q-th input sample vector, q = 1, 2, …, N, i = 1, 2, …, I, and N is the number of samples.
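The width and the hidden-to-output weights can then be sketched as follows, using the heuristics above; `targets` stands for the desired network outputs for each training sample and is an assumption of this sketch.

```python
import numpy as np

def rbf_width(centers):
    """Width sigma = d_max / sqrt(2M), with d_max the largest distance between centers."""
    d_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
    return d_max / np.sqrt(2.0 * len(centers))

def output_weights(samples, centers, sigma, targets):
    """Least-squares (pseudo-inverse) solution for the hidden-to-output weights."""
    # G[q, i] = exp(-||X_q - x_i||^2 / (2 sigma^2)), the hidden-layer response matrix
    d2 = np.sum((samples[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    G = np.exp(-d2 / (2.0 * sigma ** 2))
    w, *_ = np.linalg.lstsq(G, targets, rcond=None)
    return w
```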
3) Matching and positioning.
Given the temporal ordering of the images shot by the unmanned aerial vehicle, during the return flight an image shot by the camera is extracted every fixed number of frames and features are extracted from it; for example, one image is taken every ten frames, feature descriptor vectors are generated, and a neighborhood search is performed with the trained RBF network classifier to obtain the optimal matching position. That is, based on the trained RBF network classifier, the best match between the currently extracted feature descriptors and the feature descriptors of the images shot during the outbound flight (from the departure point to the destination) stored in the database is obtained. It is then checked whether the similarity between the currently extracted descriptors and the optimal matching result meets the preset similarity threshold; if so, the position of the current optimal matching result is used as the current position estimate of the unmanned aerial vehicle during the return flight, and the positioning information is obtained.
Furthermore, navigation-system error compensation can be applied to the obtained position estimate to yield the positioning information. If the similarity at the optimal matching position is lower than the predefined similarity threshold, the position is declared unknown; ground images of the area where the unmanned aerial vehicle is located continue to be acquired, and navigation proceeds using the velocity and attitude information of the unmanned aerial vehicle together with the most recently obtained positioning result.
In the error-compensation step, the position estimation result (the position of the current best matching result) and the final position estimation results of the most recent n previous estimates, j = 1, 2, …, n (with n a preset value), are used: the average standard error of the most recent n positioning results serves as the current compensation amount, and error compensation is applied to the current position estimation result to obtain the current final position estimation result.
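Putting the return-flight stage together, a sketch of the positioning loop might look as follows; `extract_descriptors` and `rbf_match` are placeholders for the ORB extraction and RBF neighborhood search described above, and the similarity-threshold handling follows the fallback rule just described. All names and the threshold value are illustrative.

```python
def locate_on_return(frames, database, rbf_match, extract_descriptors,
                     frame_interval=10, sim_threshold=0.8):
    """Yield a position fix (or the last known fix) for every processed return-flight frame."""
    last_fix = None
    for idx, frame in enumerate(frames):
        if idx % frame_interval:                 # process one image every `frame_interval` frames
            continue
        descriptors = extract_descriptors(frame)
        entry, similarity = rbf_match(descriptors, database)
        if similarity >= sim_threshold:          # best match is trusted: use its stored positioning info
            last_fix = entry.pose
        # otherwise the position is treated as unknown and navigation continues from the
        # most recent fix together with the drone's velocity and attitude information
        yield last_fix
```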
Referring to fig. 5, the RBF network matching location processing procedure of the present invention is:
First, the feature descriptor information stored in the visual database is randomly sampled, and the RBF network is trained on the sampled binary feature descriptor data: the training mode is set and the RBF centers are determined with k-means clustering; the RBF width is determined from the obtained centers; the connection weights from the hidden layer to the output layer are determined with the least-squares method from the RBF centers and width; and the RBF network structure is thus finally determined.
With the RBF network structure obtained from training, neighborhood matching is performed on the feature point descriptors generated from the images acquired during the return flight, the optimal matching position is determined, and finally the position of the current image during the return flight is estimated from the stored positioning information.
In summary, during data collection the invention starts from images acquired by the camera, performs feature point detection on the images with the ORB feature point extraction technique, and extracts a descriptor for each keypoint. A database entry is created and stored, consisting of the extracted descriptors and the positioning information, where the positioning information comprises the attitude information and position information of the unmanned aerial vehicle. The RBF network has three main parameters to solve for: the centers of the basis functions, the variance, and the weights from the hidden layer to the output layer. The method adopts a self-organizing center-selection learning procedure: first, an unsupervised learning step determines the centers and variance of the hidden-layer basis functions; second, a supervised learning step follows, in which the weights between the hidden layer and the output layer are obtained directly with the least-squares method. In the visual matching and positioning process, starting from the images captured while the unmanned aerial vehicle is returning, one image is extracted every fixed number of frames to reduce the similarity between adjacent images, keypoints are detected, and a descriptor is extracted for each keypoint with the same feature point extraction method as in the data collection process. The RBF network is used to find the descriptor previously inserted into the database that is closest to the current image, giving the optimal matching position, and the positioning information of the current image is estimated from that optimal matching position.
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except mutually exclusive features and/or steps.
Claims (5)
1. An unmanned aerial vehicle visual navigation positioning method based on an RBF network is characterized by comprising the following steps:
step S1: setting an RBF neural network for matching the feature point descriptors of the image, and training the neural network;
the RBF neural network comprises an input layer, a hidden layer and an output layer, wherein a transfer function of the hidden layer adopts a radial basis function;
the training samples are: in the navigation process of the unmanned aerial vehicle, images are acquired through an onboard camera; the feature vectors of the training samples are: feature point descriptors of the image obtained through ORB feature point detection processing;
step S2: constructing a visual database of the unmanned aerial vehicle during navigation:
in the navigation process of the unmanned aerial vehicle, images are collected through an airborne camera, ORB feature point detection processing is carried out on the collected images, descriptors of all feature points are extracted, and feature point descriptors of the current images are obtained; storing the feature point descriptors of the image and positioning information during image acquisition into a visual database;
and step S3: unmanned aerial vehicle vision navigation positioning based on visual database:
based on a fixed interval period, extracting an image acquired by an airborne camera to serve as an image to be matched;
carrying out ORB feature point detection processing on the image to be matched, and extracting a descriptor of each feature point to obtain a feature point descriptor of the image to be matched;
inputting the feature point descriptor of the image to be matched into a trained RBF neural network, and performing neighborhood search to obtain the optimal matching feature point descriptor of the image to be matched in a visual database;
detecting whether the similarity between the optimal matching feature point descriptor and the feature point descriptor of the image to be matched is smaller than a preset similarity threshold; if so, continuing navigation based on the most recently obtained visual navigation positioning result; if not, obtaining the current position estimation result of the unmanned aerial vehicle based on the positioning information recorded in the database for the optimal matching feature point descriptor, and performing navigation-system error compensation on the obtained position estimation result to obtain the current visual navigation positioning result of the unmanned aerial vehicle.
2. The method of claim 1, wherein the weights from the input layer to the hidden layer of the RBF neural network are fixed at 1.
3. The method of claim 2, wherein in training the RBF neural network, a plurality of basis function centers of the radial basis functions are determined using a k-means clustering algorithm;
the variance σ of the radial basis function is set as:wherein M is the number of cells in the hidden layer, d max Is the maximum distance between the centers of the basis functions;
4. The method of claim 1, wherein the positioning information comprises pose information and position information of the drone.
5. The method according to claim 1, wherein in step S3 the interval at which images collected by the onboard camera are extracted is once every ten frames.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910924244.1A CN110631588B (en) | 2019-09-23 | 2019-09-23 | Unmanned aerial vehicle visual navigation positioning method based on RBF network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910924244.1A CN110631588B (en) | 2019-09-23 | 2019-09-23 | Unmanned aerial vehicle visual navigation positioning method based on RBF network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110631588A CN110631588A (en) | 2019-12-31 |
CN110631588B true CN110631588B (en) | 2022-11-18 |
Family
ID=68972992
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910924244.1A Active CN110631588B (en) | 2019-09-23 | 2019-09-23 | Unmanned aerial vehicle visual navigation positioning method based on RBF network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110631588B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111221340B (en) * | 2020-02-10 | 2023-04-07 | 电子科技大学 | Design method of migratable visual navigation based on coarse-grained features |
CN111833395B (en) * | 2020-06-04 | 2022-11-29 | 西安电子科技大学 | Direction-finding system single target positioning method and device based on neural network model |
CN111860375A (en) * | 2020-07-23 | 2020-10-30 | 南京科沃信息技术有限公司 | Plant protection unmanned aerial vehicle ground monitoring system and monitoring method thereof |
CN114202583A (en) * | 2021-12-10 | 2022-03-18 | 中国科学院空间应用工程与技术中心 | Visual positioning method and system for unmanned aerial vehicle |
CN113936064B (en) * | 2021-12-17 | 2022-05-20 | 荣耀终端有限公司 | Positioning method and device |
CN115729269B (en) * | 2022-12-27 | 2024-02-20 | 深圳市逗映科技有限公司 | Unmanned aerial vehicle intelligent recognition system based on machine vision |
Family Cites Families (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5043737A (en) * | 1990-06-05 | 1991-08-27 | Hughes Aircraft Company | Precision satellite tracking system |
JP3256332B2 (en) * | 1993-05-24 | 2002-02-12 | 郁男 荒井 | Distance measuring method and distance measuring device |
US6249606B1 (en) * | 1998-02-19 | 2001-06-19 | Mindmaker, Inc. | Method and system for gesture category recognition and training using a feature vector |
TW475991B (en) * | 1998-12-28 | 2002-02-11 | Nippon Kouatsu Electric Co Ltd | Fault point location system |
TW426882B (en) * | 1999-10-29 | 2001-03-21 | Taiwan Semiconductor Mfg | Overlap statistic process control with efficiency by using positive and negative feedback overlap correction system |
JP4020143B2 (en) * | 2006-02-20 | 2007-12-12 | トヨタ自動車株式会社 | Positioning system, positioning method and car navigation system |
DE102006055563B3 (en) * | 2006-11-24 | 2008-01-03 | Ford Global Technologies, LLC, Dearborn | Correcting desired value deviations of fuel injected into internal combustion engine involves computing deviation value using square error method and correcting deviation based on computed deviation value |
CN101118280B (en) * | 2007-08-31 | 2011-06-01 | 西安电子科技大学 | Distributed wireless sensor network node self positioning method |
CN101476891A (en) * | 2008-01-02 | 2009-07-08 | 丘玓 | Accurate navigation system and method for movable object |
US8165728B2 (en) * | 2008-08-19 | 2012-04-24 | The United States Of America As Represented By The Secretary Of The Navy | Method and system for providing a GPS-based position |
CN101655561A (en) * | 2009-09-14 | 2010-02-24 | 南京莱斯信息技术股份有限公司 | Federated Kalman filtering-based method for fusing multilateration data and radar data |
CN101860622B (en) * | 2010-06-11 | 2014-07-16 | 中兴通讯股份有限公司 | Device and method for unlocking mobile phone |
CN102387526B (en) * | 2010-08-30 | 2016-02-10 | 中兴通讯股份有限公司 | A kind of method and device improving wireless cellular system positioning precision |
CN103561463B (en) * | 2013-10-24 | 2016-06-29 | 电子科技大学 | A kind of RBF neural indoor orientation method based on sample clustering |
CN103983263A (en) * | 2014-05-30 | 2014-08-13 | 东南大学 | Inertia/visual integrated navigation method adopting iterated extended Kalman filter and neural network |
CN104330084B (en) * | 2014-11-13 | 2017-06-16 | 东南大学 | A kind of submarine navigation device neural network aiding Combinated navigation method |
US9652688B2 (en) * | 2014-11-26 | 2017-05-16 | Captricity, Inc. | Analyzing content of digital images |
CN105891863B (en) * | 2016-06-16 | 2018-03-20 | 东南大学 | It is a kind of based on highly constrained EKF localization method |
CN106203261A (en) * | 2016-06-24 | 2016-12-07 | 大连理工大学 | Unmanned vehicle field water based on SVM and SURF detection and tracking |
US10514711B2 (en) * | 2016-10-09 | 2019-12-24 | Airspace Systems, Inc. | Flight control using computer vision |
CN106709909B (en) * | 2016-12-13 | 2019-06-25 | 重庆理工大学 | A kind of flexible robot's visual identity and positioning system based on deep learning |
CN106780484A (en) * | 2017-01-11 | 2017-05-31 | 山东大学 | Robot interframe position and orientation estimation method based on convolutional neural networks Feature Descriptor |
CN107030699B (en) * | 2017-05-18 | 2020-03-10 | 广州视源电子科技股份有限公司 | Pose error correction method and device, robot and storage medium |
CN108426576B (en) * | 2017-09-15 | 2021-05-28 | 辽宁科技大学 | Aircraft path planning method and system based on identification point visual navigation and SINS |
CN107808407B (en) * | 2017-10-16 | 2020-12-18 | 亿航智能设备(广州)有限公司 | Binocular camera-based unmanned aerial vehicle vision SLAM method, unmanned aerial vehicle and storage medium |
CN108051836B (en) * | 2017-11-02 | 2022-06-10 | 中兴通讯股份有限公司 | Positioning method, device, server and system |
CN107909600B (en) * | 2017-11-04 | 2021-05-11 | 南京奇蛙智能科技有限公司 | Unmanned aerial vehicle real-time moving target classification and detection method based on vision |
CN107862705B (en) * | 2017-11-21 | 2021-03-30 | 重庆邮电大学 | Unmanned aerial vehicle small target detection method based on motion characteristics and deep learning characteristics |
CN108153334B (en) * | 2017-12-01 | 2020-09-25 | 南京航空航天大学 | Visual autonomous return and landing method and system for unmanned helicopter without cooperative target |
CN108168539B (en) * | 2017-12-21 | 2021-07-27 | 儒安物联科技集团有限公司 | Blind person navigation method, device and system based on computer vision |
CN109959898B (en) * | 2017-12-26 | 2023-04-07 | 中国船舶重工集团公司七五〇试验场 | Self-calibration method for base type underwater sound passive positioning array |
CN108364314B (en) * | 2018-01-12 | 2021-01-29 | 香港科技大学深圳研究院 | Positioning method, system and medium |
CN108820233B (en) * | 2018-07-05 | 2022-05-06 | 西京学院 | Visual landing guiding method for fixed-wing unmanned aerial vehicle |
CN109141194A (en) * | 2018-07-27 | 2019-01-04 | 成都飞机工业(集团)有限责任公司 | A kind of rotation pivot angle head positioning accuracy measures compensation method indirectly |
CN109238288A (en) * | 2018-09-10 | 2019-01-18 | 电子科技大学 | Autonomous navigation method in a kind of unmanned plane room |
CN109739254B (en) * | 2018-11-20 | 2021-11-09 | 国网浙江省电力有限公司信息通信分公司 | Unmanned aerial vehicle adopting visual image positioning in power inspection and positioning method thereof |
CN109670513A (en) * | 2018-11-27 | 2019-04-23 | 西安交通大学 | A kind of piston attitude detecting method based on bag of words and support vector machines |
CN109445449B (en) * | 2018-11-29 | 2019-10-22 | 浙江大学 | A kind of high subsonic speed unmanned plane hedgehopping control system and method |
CN109615645A (en) * | 2018-12-07 | 2019-04-12 | 国网四川省电力公司电力科学研究院 | The Feature Points Extraction of view-based access control model |
CN109658445A (en) * | 2018-12-14 | 2019-04-19 | 北京旷视科技有限公司 | Network training method, increment build drawing method, localization method, device and equipment |
CN109859225A (en) * | 2018-12-24 | 2019-06-07 | 中国电子科技集团公司第二十研究所 | A kind of unmanned plane scene matching aided navigation localization method based on improvement ORB Feature Points Matching |
CN109765930B (en) * | 2019-01-29 | 2021-11-30 | 理光软件研究所(北京)有限公司 | Unmanned aerial vehicle vision navigation |
CN109991633A (en) * | 2019-03-05 | 2019-07-09 | 上海卫星工程研究所 | A kind of low orbit satellite orbit determination in real time method |
CN110058602A (en) * | 2019-03-27 | 2019-07-26 | 天津大学 | Multi-rotor unmanned aerial vehicle autonomic positioning method based on deep vision |
CN110032965B (en) * | 2019-04-10 | 2023-06-27 | 南京理工大学 | Visual positioning method based on remote sensing image |
-
2019
- 2019-09-23 CN CN201910924244.1A patent/CN110631588B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN110631588A (en) | 2019-12-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110631588B (en) | Unmanned aerial vehicle visual navigation positioning method based on RBF network | |
CN110856112B (en) | Crowd-sourcing perception multi-source information fusion indoor positioning method and system | |
CN111028277B (en) | SAR and optical remote sensing image registration method based on pseudo-twin convolution neural network | |
Leira et al. | Object detection, recognition, and tracking from UAVs using a thermal camera | |
CN112070807B (en) | Multi-target tracking method and electronic device | |
CN106529538A (en) | Method and device for positioning aircraft | |
CN109671119A (en) | A kind of indoor orientation method and device based on SLAM | |
CN112325883B (en) | Indoor positioning method for mobile robot with WiFi and visual multi-source integration | |
CN109099929B (en) | Intelligent vehicle positioning device and method based on scene fingerprints | |
CN113313763B (en) | Monocular camera pose optimization method and device based on neural network | |
Tao et al. | Scene context-driven vehicle detection in high-resolution aerial images | |
CN112419374A (en) | Unmanned aerial vehicle positioning method based on image registration | |
WO2018207426A1 (en) | Information processing device, information processing method, and program | |
Dumble et al. | Airborne vision-aided navigation using road intersection features | |
CN114119659A (en) | Multi-sensor fusion target tracking method | |
CN117876723B (en) | Unmanned aerial vehicle aerial image global retrieval positioning method under refusing environment | |
CN108629295A (en) | Corner terrestrial reference identification model training method, the recognition methods of corner terrestrial reference and device | |
CN114238675A (en) | Unmanned aerial vehicle ground target positioning method based on heterogeneous image matching | |
Liu et al. | Eyeloc: Smartphone vision-enabled plug-n-play indoor localization in large shopping malls | |
CN110472092B (en) | Geographical positioning method and system of street view picture | |
CN115861352A (en) | Monocular vision, IMU and laser radar data fusion and edge extraction method | |
CN114046790A (en) | Factor graph double-loop detection method | |
Yao et al. | A magnetic interference detection-based fusion heading estimation method for pedestrian dead reckoning positioning | |
Kim et al. | Robust imaging sonar-based place recognition and localization in underwater environments | |
CN117664124A (en) | Inertial guidance and visual information fusion AGV navigation system and method based on ROS |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |