CN111369589A - Unmanned aerial vehicle tracking method based on multi-strategy fusion - Google Patents
Unmanned aerial vehicle tracking method based on multi-strategy fusion
- Publication number
- CN111369589A CN111369589A CN202010120410.5A CN202010120410A CN111369589A CN 111369589 A CN111369589 A CN 111369589A CN 202010120410 A CN202010120410 A CN 202010120410A CN 111369589 A CN111369589 A CN 111369589A
- Authority
- CN
- China
- Prior art keywords
- unmanned aerial
- aerial vehicle
- image
- tracking method
- method based
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/12—Target-seeking control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Multimedia (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention discloses an unmanned aerial vehicle tracking method based on multi-strategy fusion. The method takes a Centernet deep learning network as the structural main body and combines it with a frequency spectrum detection module and a steering engine control module to provide a new vision and frequency spectrum combined evaluation algorithm. The position of the unmanned aerial vehicle in the video image is calculated effectively, and the camera steering engine is controlled to rotate through the central point of this position, so that an unmanned aerial vehicle in flight can be tracked accurately within a range of 3 kilometers and displayed in a more intuitive visual tracking mode, solving the problem that an unmanned aerial vehicle in flight is difficult to track.
Description
Technical Field
The invention relates to the field of image processing, in particular to an unmanned aerial vehicle tracking method based on multi-strategy fusion.
Background
An unmanned aerial vehicle generally refers to a powered, controllable, unmanned aircraft that can perform a variety of tasks and be reused. Compared with a piloted aircraft, an unmanned aerial vehicle is lighter, has a smaller radar reflection cross-section, a lower operating cost and higher flexibility, and involves no safety risk to crew members, so it can be widely used for military tasks such as reconnaissance and attack; on the civil side, it can be used in fields such as meteorological detection, disaster monitoring, geological exploration and map surveying. As a result, unmanned aerial vehicles are receiving increasing attention from a growing number of countries and are developing rapidly.
Unmanned aerial vehicles fly fast and generally have distinctive geometries that expose little complete structural information, which makes them difficult to track in flight.
Disclosure of Invention
The invention aims to provide an unmanned aerial vehicle tracking method based on multi-strategy fusion, and aims to solve the problem that an unmanned aerial vehicle is difficult to track in flight.
In order to achieve the aim, the invention provides an unmanned aerial vehicle tracking method based on multi-strategy fusion,
the method comprises the following steps:
training an unmanned aerial vehicle image sample based on a Centernet network to generate a feature map;
acquiring an unmanned aerial vehicle signal, and analyzing and processing the unmanned aerial vehicle signal to obtain a direction parameter of the unmanned aerial vehicle;
outputting a control signal based on the direction parameter of the unmanned aerial vehicle to control the camera lens to rotate, and acquiring a video image of the unmanned aerial vehicle to obtain an estimated position of the unmanned aerial vehicle;
carrying out image block weighting processing on the acquired video image, and obtaining the specific position of the unmanned aerial vehicle in the video image based on a vision and frequency spectrum combined evaluation algorithm;
and acquiring the central coordinate of the unmanned aerial vehicle based on the opencv function, and tracking the unmanned aerial vehicle in real time.
The specific steps of training an unmanned aerial vehicle image sample based on the Centernet network and generating a feature map are as follows:
acquiring three channels of an unmanned aerial vehicle image input RGB, and outputting a predicted value based on convolutional neural network processing;
and extracting features based on network forward propagation to obtain a feature map.
The specific steps of obtaining the unmanned aerial vehicle signal and analyzing and processing the unmanned aerial vehicle signal to obtain the direction parameters of the unmanned aerial vehicle are as follows:
detecting and acquiring unmanned aerial vehicle signals within a range of 3 kilometers;
performing data fusion based on the use environment;
extracting unmanned aerial vehicle signal parameters through down-conversion, A/D sampling, digital channelization and array signal processing;
carrying out amplitude-phase integrated direction finding processing based on parameters of different antennas, and comparing the measured parameters with a database to obtain the model of the unmanned aerial vehicle;
and positioning based on a multi-station direction-finding cross positioning system to obtain the direction parameters of the unmanned aerial vehicle.
The specific steps of outputting a control signal based on the direction parameters of the unmanned aerial vehicle to control the camera lens to rotate, acquiring the video image of the unmanned aerial vehicle and obtaining the estimated position of the unmanned aerial vehicle are as follows:
Reading the output direction parameters of the unmanned aerial vehicle;
calculating the difference between the direction parameter and the image center;
calculating the rotation quantity of the pan-tilt head through a mathematical model;
controlling the rotation of the pan-tilt head through a position PD algorithm, wherein the position PD algorithm model is:
S(n) = Kp·e(n) + Kp·Td·[e(n) − e(n−1)]
wherein S(n) is the control output, Kp is the proportional control parameter, Td is the derivative control parameter, e(n) is the difference between the current state value and the target value, and n is the control number; with Kp·Td denoted by Kd, there is:
S(n) = Kp·e(n) + Kd·[e(n) − e(n−1)].
The specific steps of carrying out image block weighting processing on the acquired video image and obtaining the specific position of the unmanned aerial vehicle in the video image based on a vision and frequency spectrum joint evaluation algorithm are as follows:
dividing the image into three image blocks and giving weighting parameters;
extracting key points of each category of the feature map based on the Centernet network;
and carrying out confidence judgment on the key points to obtain specific positions.
Wherein, the specific steps of performing confidence judgment on the key points and acquiring the specific positions are as follows:
performing a first confidence judgment on the original image acquired by the camera based on the original Centernet algorithm, and if the key point in the feature map is smaller than a threshold value A, magnifying the camera by a fixed multiple;
if the key point in the feature map is larger than the threshold A, performing secondary confidence judgment by using a Centernet algorithm based on a weighted parameter and a feature response value calculation mode;
and if the secondary confidence coefficient judges that the characteristic peak value is larger than the threshold value B, displaying the characteristic peak value by using an opencv function picture frame, and returning the central coordinates (x, y) of the unmanned aerial vehicle in the image.
Wherein, the characteristic response value calculation mode establishes the following formula:
y(t) = (ω + B(n))·x(t),  t ≥ 0, n ≥ 0
wherein y(t) represents the characteristic response value, t represents the characteristic point ranking number, x(t) represents the response value of each characteristic point calculated by the original Centernet algorithm, ω represents the image block weight, B(n) represents the accuracy increase brought by each camera magnification, and n represents the number of camera magnifications;
B(n) = (1.1)^n·β,  n ≥ 0
where β is an initial constant.
The unmanned aerial vehicle tracking method based on multi-strategy fusion integrates spectrum detection and visual tracking: the unmanned aerial vehicle is roughly positioned by spectrum detection and then accurately positioned by visual tracking, and at the same time its position in the video is marked in a more intuitive form, so that users and monitors of the unmanned aerial vehicle can observe its position more clearly.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flow chart of a method for tracking an unmanned aerial vehicle based on multi-policy fusion according to the present invention;
FIG. 2 is a block diagram of the operation of the unmanned aerial vehicle tracking method based on multi-policy fusion according to the present invention;
fig. 3 is an image block diagram of the unmanned aerial vehicle tracking method based on multi-policy fusion according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
In the description of the present invention, it is to be understood that the terms "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on the orientations or positional relationships illustrated in the drawings, and are used merely for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention. Further, in the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Examples
Referring to fig. 1, the method for tracking an unmanned aerial vehicle based on multi-policy fusion according to the present invention includes:
s101, training the unmanned aerial vehicle image sample based on the Centernet network to generate a feature map.
Acquiring three channels of an unmanned aerial vehicle image input RGB, and outputting a predicted value based on convolutional neural network processing;
the 24-bit RGB image is also called a full-color image, and has three channels, namely R (red), G (green) and B (blue), and the RGB image and the gray value are understood by using a halcon program and a halcon self-contained image. The gray values of the three-channel image are a combination of the gray values of the three single channels. The gray scale values are 0-255, each channel is 0-255, the image looks brighter the larger the value, and the image is darker the smaller the value. The darker which color is seen in which part on the three-channel image, the greater which color component is evident in this part, the brighter it reflects on this single channel.
And extracting features based on network forward propagation to obtain a feature map.
In network forward propagation, each neuron has a plurality of inputs and one output, and the input of a neuron can be the output of other neurons or the input of the whole neural network. The feature map (featuremap) is equivalent to an image formed after the convolutional neural network extracts the features of the original image. Viewed in terms of probability, each point in the feature map has its own probability for each category, namely a response value, so a hotspot map (heatmap) is formed in which the color depth reflects the size of the response value; the point with the largest response value is the point where the tracking target is located.
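The idea of taking the hottest point of the heatmap as the target location can be sketched as follows (a simplified single-category example; the heatmap size and the NumPy implementation are assumptions, not the patented code):

```python
import numpy as np

def locate_target(heatmap):
    """Return (x, y, response) of the strongest response in a single-category
    heatmap -- the point where the tracking target is taken to lie."""
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return int(x), int(y), float(heatmap[y, x])

# Usage with a dummy 128 x 128 heatmap (real heatmaps come from the Centernet
# forward pass):
hm = np.random.rand(128, 128).astype(np.float32)
print(locate_target(hm))
```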
S102, unmanned aerial vehicle signals are obtained, and direction parameters of the unmanned aerial vehicle are obtained after the unmanned aerial vehicle signals are analyzed and processed.
The camera and the spectrum detection module are started; at this stage the spectrum detection module, which holds ownership of the steering engine control module, detects and captures unmanned aerial vehicle signals within a range of 3 kilometers and performs data fusion on the signal data according to the use environment. The remote controller signal sent by the ground remote controller and the image transmission signal downlinked by the unmanned aerial vehicle are intercepted, and the signal parameters are extracted through down-conversion, A/D sampling, digital channelization and array signal processing. Amplitude-phase integrated direction finding processing is carried out according to the parameters of different antennas, and the measured parameters are compared with a database to obtain the model of the unmanned aerial vehicle. Positioning is then performed based on a multi-station direction-finding cross positioning system to obtain the direction parameters of the unmanned aerial vehicle.
S103, outputting a control signal based on the direction parameter of the unmanned aerial vehicle to control the camera lens to rotate, and acquiring a video image of the unmanned aerial vehicle to obtain the estimated position of the unmanned aerial vehicle.
Reading the output direction parameters of the unmanned aerial vehicle, such as its direction and angle relative to the camera; calculating the difference between the direction parameter and the image center; calculating the rotation quantity of the pan-tilt head through a mathematical model; and controlling the rotation of the pan-tilt head through a position PD algorithm, wherein the position PD algorithm model is as follows:
S(n) = Kp·e(n) + Kd·[e(n) − e(n−1)]
where S(n) is the control output, Kp is the proportional control parameter, Kd is the derivative gain (Kd denotes Kp·Td, with Td the derivative control parameter), and e(n) is the difference between the current state value and the target value.
The central position of the camera image is the estimated position of the unmanned aerial vehicle.
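A minimal sketch of this positional PD control, one controller per pan-tilt axis (the gain values are illustrative assumptions, not values disclosed in the patent):

```python
class PositionPD:
    """Positional PD controller S(n) = Kp*e(n) + Kd*(e(n) - e(n-1)).
    The gain values below are illustrative assumptions."""

    def __init__(self, kp=0.5, kd=0.1):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def step(self, error):
        # error e(n): difference between the drone's direction parameter
        # (or detected center) and the image center, per axis
        out = self.kp * error + self.kd * (error - self.prev_error)
        self.prev_error = error
        return out

# One controller per pan-tilt axis; the outputs are the rotation commands
# sent to the steering engine.
pan, tilt = PositionPD(), PositionPD()
print(pan.step(15.0), tilt.step(-4.0))
```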
S104, carrying out image block weighting processing on the acquired video image, and obtaining the specific position of the unmanned aerial vehicle in the video image based on a vision and frequency spectrum combined evaluation algorithm.
Taking (960, 540) as the image center point of the acquired 1920 × 1080 video image, as shown in fig. 3, the whole image is divided into 3 image blocks, namely image_inside, image_middle and image_outside. image_inside is the very middle part of the image; image_inside blocks should have a greater probability of containing the drone target, while image_outside, being far from the image center, should have a lesser probability of containing the drone target. Let the weighting parameters of image_inside, image_middle and image_outside be ω1, ω2 and ω3 respectively; then ω2 = 0.8·ω1 and ω3 = 0.4·ω1.
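A sketch of such a per-pixel weighting (the exact block boundaries are illustrative assumptions; the description above fixes only the weight ratios ω2 = 0.8·ω1 and ω3 = 0.4·ω1):

```python
import numpy as np

def block_weight_map(width=1920, height=1080, omega1=1.0):
    """Per-pixel weight map for image_inside / image_middle / image_outside.
    The central-1/3 and central-2/3 block boundaries are assumptions."""
    w = np.full((height, width), 0.4 * omega1, dtype=np.float32)   # image_outside
    w[height//6:5*height//6, width//6:5*width//6] = 0.8 * omega1   # image_middle
    w[height//3:2*height//3, width//3:2*width//3] = omega1         # image_inside
    return w

wmap = block_weight_map()
print(wmap[540, 960], wmap[0, 0])   # weight at the image center vs a corner
```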
The Centernet detector compares each feature point on the feature map with its 8 connected neighboring points and retains a point if its response value is greater than or equal to the values of its eight neighbors, yielding the top 100 key points that meet this requirement.
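This 8-neighbor comparison and top-100 selection correspond to the standard CenterNet decoding step; a PyTorch sketch is shown below (it assumes the feature map is available as a (C, H, W) tensor of response values and is an illustration, not the patented implementation):

```python
import torch
import torch.nn.functional as F

def extract_keypoints(heatmap, k=100):
    """Keep points whose response is >= all 8 neighbours (3x3 max pooling)
    and return the top-k scores with their categories and coordinates.
    heatmap: (C, H, W) tensor of per-category response values."""
    hm = heatmap.unsqueeze(0)                                    # (1, C, H, W)
    keep = (hm == F.max_pool2d(hm, kernel_size=3, stride=1, padding=1)).float()
    hm = hm * keep                                               # suppress non-peaks
    scores, idx = torch.topk(hm.view(-1), k)                     # top-100 key points
    c, h, w = heatmap.shape
    cats = torch.div(idx, h * w, rounding_mode="floor")
    ys = torch.div(idx % (h * w), w, rounding_mode="floor")
    xs = idx % w
    return scores, cats, xs, ys

scores, cats, xs, ys = extract_keypoints(torch.rand(1, 128, 128))
```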
Let the n key points of category c detected by the above method form a set, each key point given by its integer coordinates (xi, yi); a detection box is then generated around each key point (in Centernet, from the predicted offsets and box sizes). All detection boxes constitute a rough set of candidate boxes; a threshold is then set, the boxes whose key-point response values are smaller than the threshold are discarded, and the boxes above the threshold form the final drone bounding box.
Referring to fig. 2, the Centernet algorithm is used to perform a first confidence judgment on the original image acquired by the camera. If the key point in the feature map is smaller than the threshold A, the camera is magnified by a fixed multiple, n is increased by 1, and B(n) is updated; if the key point in the feature map is larger than the threshold A, a second confidence judgment is performed: the image block weights are introduced into the video acquired by the camera, and the confidence judgment is carried out again with the Centernet algorithm using the characteristic response value calculation mode. Finally, if the characteristic peak value is larger than the threshold B in the second confidence judgment, the tracking target is determined and displayed with an opencv function picture frame, and the central coordinates (x, y) of the unmanned aerial vehicle in the image are returned; at this point the steering engine control right is held again by the visual tracking module, which controls the steering engine to rotate again through (x, y).
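This two-stage decision can be sketched as follows; the weighted response uses the calculation mode given immediately below, the weight map follows the sketch after fig. 3 above, and the numeric thresholds and β are assumptions rather than values disclosed in this description:

```python
def confidence_decision(peak, x, y, omega_map, n, beta, thr_a, thr_b):
    """One pass of the two-stage confidence judgement (a sketch).
    Returns ("zoom", n+1) when the camera should magnify by its fixed multiple,
    ("track", (x, y)) when the target is confirmed, or ("wait", n) otherwise."""
    if peak < thr_a:                                   # first judgement (raw Centernet)
        return "zoom", n + 1                           # n += 1 so that B(n) is updated
    b_n = (1.1 ** n) * beta                            # B(n) = (1.1)^n * beta
    weighted_peak = (omega_map[y, x] + b_n) * peak     # y(t) = (omega + B(n)) * x(t)
    if weighted_peak > thr_b:                          # second judgement (weighted)
        return "track", (x, y)                         # draw the box, hand steering
    return "wait", n                                   #   control to visual tracking

# Example with the weight map sketched after fig. 3 (hypothetical numbers):
# action, payload = confidence_decision(0.62, 960, 540, wmap, n=1, beta=0.05,
#                                        thr_a=0.3, thr_b=0.7)
```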
On the basis of the introduced image block weights, the algorithm designs a new characteristic response value calculation mode, given by the following formula:
y(t) = (ω + B(n))·x(t),  t ≥ 0, n ≥ 0
where t represents the feature point ranking number, x(t) represents the response value of each feature point calculated by the original Centernet algorithm, and y(t) represents the final response value of each feature point. ω represents the image block weight and takes 3 different values ω1, ω2 and ω3 according to the 3 regions image_inside, image_middle and image_outside, where ω2 = 0.8·ω1 and ω3 = 0.4·ω1. B(n) is an increasing function expressing that each (fixed-multiple) magnification of the camera raises the image resolution and thus the accuracy of the Centernet tracking algorithm, where n represents the number of camera magnifications; the equation is as follows:
B(n) = (1.1)^n·β,  n ≥ 0
where β is an initial constant.
S105, acquiring the center coordinate of the unmanned aerial vehicle based on the opencv function, and tracking the unmanned aerial vehicle in real time.
After the position of the unmanned aerial vehicle in the video image is determined, a rectangular frame where the unmanned aerial vehicle is located is drawn by using an opencv function, the central coordinates (x, y) of the rectangular frame are calculated and returned, the steering engine control ownership is handed to the visual tracking module, the steering engine is controlled to rotate again through the central coordinates, and the unmanned aerial vehicle is tracked in real time.
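A minimal sketch of this drawing-and-centering step with the opencv functions (the (x1, y1, x2, y2) box format is an assumption about how the final bounding box is represented):

```python
import cv2

def draw_and_center(frame, box, color=(0, 255, 0)):
    """Draw the drone bounding box and return its center (x, y), which is fed
    back to the steering engine control for real-time tracking."""
    x1, y1, x2, y2 = [int(v) for v in box]
    cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)   # rectangular frame
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2               # central coordinates
    cv2.circle(frame, (cx, cy), 3, color, -1)             # mark the center point
    return cx, cy
```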
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (7)
1. An unmanned aerial vehicle tracking method based on multi-strategy fusion is characterized in that,
the method comprises the following steps:
training an unmanned aerial vehicle image sample based on a Centernet network to generate a feature map;
acquiring an unmanned aerial vehicle signal, and analyzing and processing the unmanned aerial vehicle signal to obtain a direction parameter of the unmanned aerial vehicle;
outputting a control signal based on the direction parameter of the unmanned aerial vehicle to control the camera lens to rotate, and acquiring a video image of the unmanned aerial vehicle to obtain an estimated position of the unmanned aerial vehicle;
carrying out image block weighting processing on the acquired video image, and obtaining the specific position of the unmanned aerial vehicle in the video image based on a vision and frequency spectrum combined evaluation algorithm;
and acquiring the central coordinate of the unmanned aerial vehicle based on the opencv function, and tracking the unmanned aerial vehicle in real time.
2. The unmanned aerial vehicle tracking method based on multi-strategy fusion as claimed in claim 1, wherein the specific steps of training the unmanned aerial vehicle image sample based on the Centernet network and generating the feature map are as follows:
inputting the three RGB channels of an unmanned aerial vehicle image, and outputting predicted values based on convolutional neural network processing;
and extracting features based on network forward propagation to obtain a feature map.
3. The unmanned aerial vehicle tracking method based on multi-strategy fusion of claim 1, wherein the specific steps of obtaining the unmanned aerial vehicle signal, analyzing and processing the unmanned aerial vehicle signal to obtain the direction parameter of the unmanned aerial vehicle are as follows:
detecting and acquiring unmanned aerial vehicle signals within a range of 3 kilometers;
performing data fusion based on the use environment;
extracting unmanned aerial vehicle signal parameters through down-conversion, A/D sampling, digital channelization and array signal processing;
carrying out amplitude-phase integrated direction finding processing based on parameters of different antennas, and comparing the measured parameters with a database to obtain the model of the unmanned aerial vehicle;
and positioning based on a multi-station direction-finding cross positioning system to obtain the direction parameters of the unmanned aerial vehicle.
4. The unmanned aerial vehicle tracking method based on multi-strategy fusion as claimed in claim 1, wherein the specific steps of outputting a control signal based on the directional parameter of the unmanned aerial vehicle to control the camera lens to rotate, acquiring the video image of the unmanned aerial vehicle, and obtaining the estimated position of the unmanned aerial vehicle are as follows:
reading the direction parameters of the unmanned aerial vehicle;
calculating the difference between the direction parameter and the image center;
calculating the rotation quantity of the pan-tilt head through a mathematical model;
controlling the rotation of the pan-tilt head through a position PD algorithm, wherein the position PD algorithm model is:
S(n) = Kp·e(n) + Kp·Td·[e(n) − e(n−1)]
wherein S(n) is the control output, Kp is the proportional control parameter, Td is the derivative control parameter, e(n) is the difference between the current state value and the target value, and n is the control number; with Kp·Td denoted by Kd, there is:
S(n) = Kp·e(n) + Kd·[e(n) − e(n−1)].
5. the unmanned aerial vehicle tracking method based on multi-strategy fusion of claim 1, wherein the specific steps of performing image block weighting processing on the acquired video image and obtaining the specific position of the unmanned aerial vehicle in the video image based on a vision and spectrum joint evaluation algorithm are as follows:
dividing the image into three image blocks and giving weighting parameters;
extracting key points of each category of the feature map based on the Centernet network;
and carrying out confidence judgment on the key points to obtain specific positions.
6. The unmanned aerial vehicle tracking method based on multi-strategy fusion as claimed in claim 5, wherein the specific steps of performing confidence judgment on the key points and obtaining the specific positions are as follows:
performing a first confidence judgment on the original image acquired by the camera based on the Centernet algorithm, and if the key point in the feature map is smaller than a threshold value A, magnifying the camera by a fixed multiple;
if the key point in the feature map is larger than the threshold A, performing secondary confidence judgment by using a Centernet algorithm based on a weighted parameter and a feature response value calculation mode;
and if the secondary confidence coefficient judges that the characteristic peak value is larger than the threshold value B, displaying the characteristic peak value by using an opencv function picture frame, and returning the central coordinates (x, y) of the unmanned aerial vehicle in the image.
7. The unmanned aerial vehicle tracking method based on multi-strategy fusion of claim 6, wherein the calculation formula of the characteristic response value calculation mode is,
y(t) = (ω + B(n))·x(t),  t ≥ 0, n ≥ 0
wherein y(t) represents the characteristic response value, t represents the characteristic point ranking number, x(t) represents the response value of each characteristic point calculated by the original Centernet algorithm, ω represents the image block weight, B(n) represents the accuracy increase brought by each camera magnification, and n represents the number of camera magnifications;
B(n) = (1.1)^n·β,  n ≥ 0
where β is an initial constant.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010120410.5A CN111369589B (en) | 2020-02-26 | 2020-02-26 | Unmanned aerial vehicle tracking method based on multi-strategy fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010120410.5A CN111369589B (en) | 2020-02-26 | 2020-02-26 | Unmanned aerial vehicle tracking method based on multi-strategy fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111369589A true CN111369589A (en) | 2020-07-03 |
CN111369589B CN111369589B (en) | 2022-04-22 |
Family
ID=71211009
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010120410.5A Active CN111369589B (en) | 2020-02-26 | 2020-02-26 | Unmanned aerial vehicle tracking method based on multi-strategy fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111369589B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016141100A2 (en) * | 2015-03-03 | 2016-09-09 | Prenav Inc. | Scanning environments and tracking unmanned aerial vehicles |
CN109283491A (en) * | 2018-08-02 | 2019-01-29 | 哈尔滨工程大学 | A kind of unmanned plane positioning system based on vector probe unit |
CN109099779A (en) * | 2018-08-31 | 2018-12-28 | 江苏域盾成鹫科技装备制造有限公司 | A kind of detecting of unmanned plane and intelligent intercept system |
CN109816695A (en) * | 2019-01-31 | 2019-05-28 | 中国人民解放军国防科技大学 | Target detection and tracking method for infrared small unmanned aerial vehicle under complex background |
CN110133573A (en) * | 2019-04-23 | 2019-08-16 | 四川九洲电器集团有限责任公司 | A kind of autonomous low latitude unmanned plane system of defense based on the fusion of multielement bar information |
CN110262529A (en) * | 2019-06-13 | 2019-09-20 | 桂林电子科技大学 | A kind of monitoring unmanned method and system based on convolutional neural networks |
CN110398720A (en) * | 2019-08-21 | 2019-11-01 | 深圳耐杰电子技术有限公司 | A kind of anti-unmanned plane detection tracking interference system and photoelectric follow-up working method |
CN110647931A (en) * | 2019-09-20 | 2020-01-03 | 深圳市网心科技有限公司 | Object detection method, electronic device, system, and medium |
Non-Patent Citations (1)
Title |
---|
WANG Jingyu et al.: "Research on detection of low-altitude small and weak UAV targets based on deep neural networks", Journal of Northwestern Polytechnical University (《西北工业大学学报》) *
Also Published As
Publication number | Publication date |
---|---|
CN111369589B (en) | 2022-04-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
EE01 | Entry into force of recordation of patent licensing contract |
Application publication date: 20200703 Assignee: Guangxi Huantai Aerospace Technology Co.,Ltd. Assignor: GUILIN University OF ELECTRONIC TECHNOLOGY Contract record no.: X2022450000392 Denomination of invention: A tracking method of uav based on multi strategy fusion Granted publication date: 20220422 License type: Common License Record date: 20221226 |