
CN109255286B - Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework - Google Patents

Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework Download PDF

Info

Publication number
CN109255286B
Authority
CN
China
Prior art keywords
unmanned aerial
detection
aerial vehicle
network framework
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810807503.8A
Other languages
Chinese (zh)
Other versions
CN109255286A (en)
Inventor
Zhi Xiyang
Yu Lijian
Hu Jianming
Gong Jinnan
Jiang Shikai
Chen Wenbin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN201810807503.8A
Publication of CN109255286A
Application granted
Publication of CN109255286B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/10: Terrestrial scenes
    • G06V 20/13: Satellite images
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unmanned aerial vehicle (UAV) optical rapid detection and identification method based on a deep learning network framework, comprising the following steps. Step one: conduct flight tests on the five mainstream UAV types on the current market to acquire optical imaging experimental data, and process the acquired data into the standard VOC data format. Step two: build a YOLO network framework, improve it with residual network modules, and train the improved network to obtain a detection and identification model. Step three: select real-shot flight optical imaging experimental data containing the five UAV types and perform detection and identification with the model obtained in step two. The method avoids the complexity and poor applicability of manually modeling UAV and complex-background features, and greatly improves the speed and accuracy of moving-target detection and identification against complex backgrounds.

Description

Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework
Technical Field
The invention belongs to the technical field of image processing and relates to a UAV detection and identification method based on optical imaging, in particular to a UAV optical rapid detection and identification method based on a YOLO (You Only Look Once) deep learning network framework.
Background
In recent years, UAV technology has developed rapidly and is widely used in military and civil fields such as aerial photography, agricultural plant protection, traffic monitoring, disaster-area surveying, power-line inspection, and rapid reconnaissance and strike. However, the flood of UAVs on the market also brings safety hazards: UAVs that violate regulations and fly outside regulated airspace pose a serious threat to civil airliners around airports and to security around sensitive areas. There is therefore an urgent need to develop surveillance and defense measures against aerial UAVs in key areas around important facilities, crowded public venues, key military zones and the like.
At present, aerial UAV surveillance and defense rely mainly on radar and GPS signals. For example, Thales (France), DroneShield (USA) and JAMMER (China) use radar to monitor UAVs, and a portable UAV jammer developed by a Shanghai electronics company forces a UAV to land or return by suppressing its remote-control and GPS positioning signals. However, methods based on radar or GPS signals are mainly suited to long-range rapid discovery, acquisition and tracking of a UAV; efficient, high-accuracy identification of the UAV type remains difficult. Detecting and identifying UAVs by optical means has the following advantages: 1) compared with radar, optical imaging captures richer target detail such as gray level, texture and structure, making it better suited to high-accuracy identification of UAV type; 2) compared with GPS, optical sensing is passive and does not require the target to cooperate by transmitting or receiving signals. UAV detection and identification based on optical imaging is therefore an important development trend. The traditional optical detection and identification approach constructs features by hand and then selects a suitable classifier, which performs well when the background is simple and the target's invariant features are relatively stable. But a UAV's relative position and attitude change constantly in flight, making it hard to find image features that satisfy scale and angle invariance, and the background during flight is itself moving and may be complex and changeable.
Disclosure of Invention
To address the difficulty of high-accuracy identification of UAV type by means of radar, GPS and the like, and the cumbersome, poorly generalizing manual feature construction of traditional optical methods, the invention provides a UAV optical rapid detection and identification method based on a YOLO deep learning network framework. The method trains the network on actual flight-test data and outputs recognition results directly. Compared with the traditional detection and identification pipeline, it avoids the complexity and poor applicability of manually modeling UAV and complex-background features, and greatly improves the speed and accuracy of moving-target detection and identification against complex backgrounds.
The object of the invention is achieved through the following technical solution:
A UAV optical rapid detection and identification method based on a YOLO deep learning network framework comprises the following steps:
Step one: conduct flight tests on the five mainstream UAV types on the current market to acquire optical imaging experimental data, and process the acquired data into the standard VOC data format;
Step two: build a YOLO network framework, improve it with residual network modules, and train the improved network to obtain a detection and identification model;
Step three: select real-shot flight optical imaging experimental data containing the five UAV types and perform detection and identification with the model obtained in step two.
Compared with the prior art, the invention has the following advantages:
(1) The invention provides a UAV optical rapid and autonomous detection and identification method based on a YOLO deep learning network framework that integrates feature construction and classification: raw data go in and classification results come out directly, with no manual feature construction, making the method well suited to automatic detection and identification of moving targets against a complex, moving background.
(2) The method is suited to rapid discovery and high-accuracy detection and identification of intruding UAVs in complex application scenarios such as airports, stadiums, concerts and the perimeters of key military zones, supporting effective UAV supervision and air defense.
(3) A UAV's relative position and attitude change constantly in flight, image features satisfying scale and angle invariance are hard to find, and the background during flight is usually complex and changeable. To address this, the invention builds on deep learning theory and uses the functions of different network layers (convolutional, pooling, regression and so on), combined with a purpose-designed network configuration, to express target features automatically and abstractly under complex conditions and to complete target classification automatically. Compared with the traditional target detection and identification pipeline, this greatly improves the speed and accuracy of moving-target detection and identification against complex backgrounds.
(4) To achieve rapid UAV detection and identification, a YOLO network framework is selected. The network treats detection as a regression problem, obtaining bounding-box coordinates, confidence and class probabilities directly from all pixels of the whole image, so its detection speed is markedly better than deep learning frameworks such as R-CNN and Fast R-CNN. Improving the YOLO network with residual modules reduces the chance of gradient explosion or vanishing gradients during training and effectively raises the probability that a trained model is usable, which matters for the practical application of the YOLO-based UAV rapid detection and identification method.
Drawings
FIG. 1 is a flow chart of the unmanned aerial vehicle optical rapid detection and identification method based on the YOLO deep learning network framework;
FIG. 2 is an example of a VOC standard data format;
FIG. 3 is a diagram of a network architecture;
FIG. 4 is a sample image in a database;
FIG. 5 shows the real-image test results.
Detailed Description
The technical solution of the invention is further described below with reference to the accompanying drawings, but is not limited thereto; any modification or equivalent replacement that does not depart from the spirit and scope of the technical solution of the invention falls within its protection scope.
The invention provides a UAV optical rapid detection and identification method based on a YOLO deep learning network framework which, as shown in FIG. 1, comprises the following steps:
Step one: conduct flight tests on the five mainstream UAV types on the current market to acquire optical imaging experimental data, and process the acquired data into the standard VOC data format. Specifically:
A DJI M100, a Phantom-3, an Inspire-1, an agricultural plant-protection UAV and a police UAV were selected for flight tests to acquire experimental data, which were then labeled and otherwise processed to conform to the VOC data format; the processed format is shown in FIG. 2.
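For concreteness, the sketch below shows the kind of VOC-format annotation produced in step one and how it can be read back. The file name, image size, box coordinates and class label in the sample are illustrative assumptions, not values from the patent.

```python
# Minimal sketch: parsing one PASCAL VOC annotation with the standard library.
# File name, image size, box coordinates and class label are assumed examples.
import xml.etree.ElementTree as ET

SAMPLE_VOC_XML = """
<annotation>
  <filename>frame_000123.jpg</filename>
  <size><width>416</width><height>416</height><depth>3</depth></size>
  <object>
    <name>phantom-3</name>
    <bndbox>
      <xmin>152</xmin><ymin>88</ymin><xmax>241</xmax><ymax>167</ymax>
    </bndbox>
  </object>
</annotation>
"""

def parse_voc(xml_text):
    """Return (filename, [(class_name, xmin, ymin, xmax, ymax), ...])."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        b = obj.find("bndbox")
        boxes.append((obj.findtext("name"),
                      int(b.findtext("xmin")), int(b.findtext("ymin")),
                      int(b.findtext("xmax")), int(b.findtext("ymax"))))
    return root.findtext("filename"), boxes

print(parse_voc(SAMPLE_VOC_XML))
```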
Step two: build a YOLO network framework, improve it with residual network modules, and train the improved network to obtain a detection and identification model. Specifically:
the detection method of YOLO is a first-order detection method that divides the input image into S × S lattices, each lattice being responsible for detecting objects "falling" into the lattice. If the coordinates of the center position of an object fall into a certain grid, the grid is responsible for detecting the object. The output information of each cell contains two major parts, B bounding box information containing rectangular area information of objects, and C probability information that objects belong to a certain category.
The Bounding box information contains 5 values, x, y, w, h, and confidence. Where x, y refers to an offset value of the center position of the bounding box of the object predicted by the current grid with respect to the current grid position, and is normalized to the coordinates of [0,1 ]. w, h are the width and height of the bounding box and are normalized to [0,1] using the width and height of the image.
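The following minimal sketch illustrates this coordinate convention, mapping a pixel-space ground-truth box to the cell-relative (x, y) offsets and image-normalized (w, h) that a grid cell regresses. The grid size S = 13 follows the text below; the sample box and image size are assumptions.

```python
# Sketch of YOLO's grid-relative box encoding; S and the sample values are
# taken for illustration only.
def encode_box(xmin, ymin, xmax, ymax, img_w, img_h, S=13):
    """Map a pixel-space box to (row, col, x, y, w, h): the responsible cell,
    the center offset inside that cell, and image-normalized width/height."""
    cx = (xmin + xmax) / 2.0 / img_w   # box center as a fraction of the image
    cy = (ymin + ymax) / 2.0 / img_h
    col = min(int(cx * S), S - 1)      # the cell the center "falls" into
    row = min(int(cy * S), S - 1)
    x = cx * S - col                   # offset within the cell, in [0, 1]
    y = cy * S - row
    w = (xmax - xmin) / img_w          # width/height normalized by the image
    h = (ymax - ymin) / img_h
    return row, col, x, y, w, h

print(encode_box(152, 88, 241, 167, img_w=416, img_h=416))
```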
The confidence reflects both whether the current bounding box contains an object and how accurate its position is, and is computed as:
confidence = P(object) · IOU
where P(object) = 1 if the bounding box contains an object (the target) and 0 otherwise, and IOU (intersection over union) is the pixel area of the intersection between the predicted bounding box and the object's ground-truth region, normalized to the [0, 1] interval by the pixel area of their union.
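A minimal sketch of the IOU term in this confidence definition, with each box given as (xmin, ymin, xmax, ymax) pixel corners; the corner representation is an assumption for illustration.

```python
# Intersection over union of two axis-aligned boxes (xmin, ymin, xmax, ymax).
def iou(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```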
A YOLO network is constructed (its structure is shown in FIG. 3(a)), containing 24 convolutional layers, which extract image features, and 2 fully connected layers, which predict target positions and class probabilities. The loss function of the network is defined as follows:
$$
\begin{aligned}
\text{loss} = {} & \lambda_{\text{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} I_{ij}^{\text{obj}} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] \\
& + \lambda_{\text{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} I_{ij}^{\text{obj}} \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right] \\
& + \sum_{i=0}^{S^2} \sum_{j=0}^{B} I_{ij}^{\text{obj}} \left( C_i - \hat{C}_i \right)^2 + \lambda_{\text{noobj}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \left(1 - I_{ij}^{\text{obj}}\right) \left( C_i - \hat{C}_i \right)^2 \\
& + \sum_{i=0}^{S^2} I_{i}^{\text{obj}} \sum_{c \in \text{classes}} \left( p_i(c) - \hat{p}_i(c) \right)^2
\end{aligned}
$$

The parameters of the formula have the following meanings: $S^2$ is the number of grid cells the image is divided into, here 13 × 13; $B$ is the number of anchor boxes per cell, here 5; $x$, $y$, $w$ and $h$ are the center coordinates, width and height of the predicted bounding box; $\hat{x}$, $\hat{y}$, $\hat{w}$ and $\hat{h}$ are the center coordinates, width and height of the actual bounding box; $C$ is the predicted confidence that the bounding box contains a target; $\hat{C}$ is the intersection over union of the prediction with the actual target bounding box; $p(c)$ is the predicted probability of belonging to class $c$; $\hat{p}(c)$ is the actual probability of belonging to class $c$, 1 if the object belongs to that class and 0 otherwise; $I_{ij}^{\text{obj}}$ indicates whether anchor box $j$ of cell $i$ contains an object, 1 if it does and 0 if not; $\lambda_{\text{coord}}$ is the position-prediction loss weight, set to 3; $\lambda_{\text{noobj}}$ is the confidence weight for boxes containing no target, set to 0.7.
The first two terms of the function are the coordinate-prediction loss, the third is the confidence-prediction loss for boxes containing an object, the fourth is the confidence-prediction loss for boxes containing no object, and the fifth is the class-prediction loss.
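The sketch below restates this five-term loss in NumPy under the stated values S = 13, B = 5, λ_coord = 3 and λ_noobj = 0.7. The (S, S, B, 5 + C) tensor layout, the per-box class targets and the random inputs are illustrative assumptions rather than the patent's actual implementation.

```python
# Illustrative NumPy sketch of the five-term YOLO loss described above.
# The tensor layout and target encoding are assumptions for demonstration.
import numpy as np

S, B, C = 13, 5, 5
LAMBDA_COORD, LAMBDA_NOOBJ = 3.0, 0.7

def yolo_loss(pred, truth, obj_mask):
    """pred/truth: (S, S, B, 5 + C) arrays holding x, y, w, h, conf, classes.
    obj_mask: (S, S, B), 1 where an anchor box is responsible for an object."""
    m = obj_mask[..., None]
    xy = np.sum(m * (pred[..., 0:2] - truth[..., 0:2]) ** 2)           # term 1
    wh = np.sum(m * (np.sqrt(pred[..., 2:4]) -
                     np.sqrt(truth[..., 2:4])) ** 2)                   # term 2
    conf_obj = np.sum(obj_mask * (pred[..., 4] - truth[..., 4]) ** 2)  # term 3
    conf_no = np.sum((1 - obj_mask) * (pred[..., 4] - truth[..., 4]) ** 2)
    cls = np.sum(m * (pred[..., 5:] - truth[..., 5:]) ** 2)            # term 5
    return LAMBDA_COORD * (xy + wh) + conf_obj + LAMBDA_NOOBJ * conf_no + cls

pred = np.random.rand(S, S, B, 5 + C)
truth = np.random.rand(S, S, B, 5 + C)
mask = np.zeros((S, S, B))
mask[6, 6, 0] = 1.0   # one anchor box responsible for one object
print(yolo_loss(pred, truth, mask))
```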
The activation function in the network is the leaky rectified linear unit, defined as follows:

$$
\phi(x) = \begin{cases} x, & x > 0 \\ 0.1\,x, & \text{otherwise} \end{cases}
$$
in order to avoid gradient explosion or gradient disappearance during training, the invention improves the network by using a residual module, and the improved network structure is shown in fig. 3(b), and the specific positions are as follows: the 3 rd pooled layer output split is combined with the 9 th convolutional layer output, the 12 th convolutional layer output split is combined with the 15 th convolutional layer output, the 4 th pooled layer output split is combined with the 18 th convolutional layer output, and the 19 th convolutional layer output split is combined with the 22 th convolutional layer output. After 4 short circuit connections are added to form a residual error unit, gradient explosion or gradient disappearance during training is effectively relieved. The propagation of the partial gradient of the residual unit at 4 is as follows:
$$
\text{loss} = F(x_i, W_i)
$$

$$
\frac{\partial\,\text{loss}}{\partial x_l} = \frac{\partial\,\text{loss}}{\partial x_L} \left( 1 + \frac{\partial}{\partial x_l} \sum_{i=l}^{L-1} F(x_i, W_i) \right)
$$

where loss is the loss function; $x_i$ and $W_i$ are the input and weights of network layer $i$, so the loss is expressed as a function $F(x_i, W_i)$ of inputs and weights; $x_l$ is the output of the layer where the residual module splits off and $x_L$ the output of the layer where it merges back, related by $x_L = x_l + \sum_{i=l}^{L-1} F(x_i, W_i)$. The first factor $\partial\,\text{loss}/\partial x_L$ is the gradient of the loss function with respect to layer $L$; the 1 inside the parentheses shows that the shortcut mechanism propagates the gradient without attenuation, while the remaining residual gradient must pass through the weighted layers.
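As an illustration of how such a shortcut carries the gradient without attenuation, here is a minimal residual unit in PyTorch. The channel count, kernel sizes and two-convolution body are assumptions; the patent's actual units merge the specific pooling and convolutional outputs listed above.

```python
# Minimal sketch of a residual (shortcut) unit of the kind used to improve
# the YOLO backbone; layer sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.LeakyReLU(0.1)          # the leaky activation above

    def forward(self, x):
        identity = x                          # the "split" branch
        out = self.act(self.conv1(x))
        out = self.conv2(out)
        # Shortcut merge: the gradient reaches `identity` unscaled, which is
        # the "1" inside the parentheses of the formula above.
        return self.act(out + identity)

x = torch.randn(1, 64, 13, 13)
print(ResidualUnit()(x).shape)                # torch.Size([1, 64, 13, 13])
```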
The network's hyperparameters are then set: the initial learning rate is 0.0003, and the network is trained by stochastic gradient descent (one sample per update, 50,000 updates in total) to obtain the detection and identification model.
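A training-loop sketch under these stated hyperparameters (learning rate 0.0003, one sample per update, 50,000 updates). The toy model and random tensors stand in for the improved YOLO network, its loss and the real flight imagery; only the optimizer settings come from the text.

```python
# Sketch: stochastic gradient descent, lr = 0.0003, batch size 1, 50,000 updates.
# The toy model, squared-error loss and random data are stand-ins.
import torch
import torch.nn as nn

S, B, C = 13, 5, 5
model = nn.Sequential(                        # stand-in for the improved YOLO net
    nn.Conv2d(3, 8, 3, padding=1), nn.LeakyReLU(0.1),
    nn.AdaptiveAvgPool2d(S), nn.Flatten(),
    nn.Linear(8 * S * S, S * S * (B * 5 + C)),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.0003)

for step in range(50_000):
    image = torch.randn(1, 3, 416, 416)            # one sample per update
    target = torch.randn(1, S * S * (B * 5 + C))   # placeholder label tensor
    loss = ((model(image) - target) ** 2).mean()   # stands in for the YOLO loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```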
Step three: select real-shot flight optical imaging experimental data containing the five UAV types and apply the detection and identification model obtained in step two. An image to be examined is input; the convolutional layers extract features, the pooling layers reduce the image size, and the fully connected layers finally output predicted target positions and class probabilities, the class with the highest probability being the recognition result. The per-class detection and identification accuracies shown in FIG. 5 are: M100, 91.96%; Inspire-1, 91.74%; Phantom-3, 89.78%; agricultural UAV, 94.13%; police UAV, 89.84%. The method thus achieves high detection and identification accuracy, and with GPU acceleration each frame is processed in milliseconds, achieving rapid detection and identification.
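A sketch of this inference step, taking the class of highest probability in the most confident grid cell as the recognition result. The output layout, the toy stand-in model and the decoding are illustrative assumptions.

```python
# Inference sketch: forward pass, then read the class probabilities of the
# most confident grid cell. Layout and the stand-in model are assumptions.
import torch
import torch.nn as nn

CLASSES = ["M100", "Inspire-1", "Phantom-3", "agricultural UAV", "police UAV"]
S, B, C = 13, 5, len(CLASSES)

model = nn.Sequential(                        # stand-in for the trained model
    nn.Conv2d(3, 8, 3, padding=1), nn.LeakyReLU(0.1),
    nn.AdaptiveAvgPool2d(S), nn.Flatten(),
    nn.Linear(8 * S * S, S * S * (B * 5 + C)),
)

with torch.no_grad():
    out = model(torch.randn(1, 3, 416, 416)).reshape(S, S, B * 5 + C)

box_conf = out[..., 4 : B * 5 : 5]            # confidence of each of the B boxes
cell = box_conf.max(dim=-1).values.argmax()   # most confident grid cell
row, col = divmod(int(cell), S)
class_probs = out[row, col, B * 5 :]          # the C class probabilities there
print("prediction:", CLASSES[int(class_probs.argmax())])
```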

Claims (4)

1. An unmanned aerial vehicle optical rapid detection and identification method based on a deep learning network framework, characterized by comprising the following steps:
Step one: conduct flight tests on five unmanned aerial vehicle types to acquire optical imaging experimental data, and process the acquired data into the standard VOC data format;
Step two: build a YOLO network framework, improve it with residual network modules, and train the improved network to obtain a detection and identification model, specifically:
(1) YOLO divides the input image into 13 × 13 grid cells, each responsible for detecting objects "falling" into it; the output of each cell has two main parts: information for 5 bounding boxes carrying the rectangular region information of objects, and C probabilities that an object belongs to each category;
(2) build a YOLO network comprising 24 convolutional layers, which extract image features, and 2 fully connected layers, which predict target positions and class probabilities;
(3) improve the YOLO network with residual modules, adding 4 shortcut connections to form residual units; the gradient propagates through the 4 residual units as follows:
$$
\text{loss} = F(x_i, W_i)
$$

$$
\frac{\partial\,\text{loss}}{\partial x_l} = \frac{\partial\,\text{loss}}{\partial x_L} \left( 1 + \frac{\partial}{\partial x_l} \sum_{i=l}^{L-1} F(x_i, W_i) \right)
$$

where loss is the loss function; $x_i$ and $W_i$ are the input and weights of network layer $i$, so the loss is expressed as a function $F(x_i, W_i)$ of inputs and weights; $x_l$ is the output of the layer where the residual module splits off and $x_L$ the output of the layer where it merges back; and the first factor $\partial\,\text{loss}/\partial x_L$ is the gradient of the loss function with respect to layer $L$;
(4) set the hyperparameters of the improved YOLO network of step (3), with an initial learning rate of 0.0003, and train the network by stochastic gradient descent to obtain the detection and identification model;
Step three: select real-shot flight optical imaging experimental data containing the five unmanned aerial vehicle types and perform detection and identification with the model obtained in step two.
2. The unmanned aerial vehicle optical rapid detection and identification method based on a deep learning network framework according to claim 1, characterized in that in step (3) the specific locations of the 4 shortcut connections are: the output of the 3rd pooling layer is split off and merged with the output of the 9th convolutional layer, the output of the 12th convolutional layer is split off and merged with the output of the 15th convolutional layer, the output of the 4th pooling layer is split off and merged with the output of the 18th convolutional layer, and the output of the 19th convolutional layer is split off and merged with the output of the 22nd convolutional layer.
3. The unmanned aerial vehicle optical rapid detection and identification method based on a deep learning network framework according to claim 1, characterized in that in step (4) the total number of updates when training the network by stochastic gradient descent is 50,000.
4. The unmanned aerial vehicle optical rapid detection and identification method based on a deep learning network framework according to claim 1, characterized in that in step three the detection and identification with the model obtained in step two proceeds as follows: an image to be examined is input; the convolutional layers extract features, the pooling layers reduce the image size, and the fully connected layers finally output predicted target positions and class-probability values, the class corresponding to the maximum class probability being the recognition result.
CN201810807503.8A 2018-07-21 2018-07-21 Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework Active CN109255286B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810807503.8A CN109255286B (en) 2018-07-21 2018-07-21 Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810807503.8A CN109255286B (en) 2018-07-21 2018-07-21 Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework

Publications (2)

Publication Number Publication Date
CN109255286A CN109255286A (en) 2019-01-22
CN109255286B 2021-08-24

Family

ID=65049063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810807503.8A Active CN109255286B (en) 2018-07-21 2018-07-21 Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework

Country Status (1)

Country Link
CN (1) CN109255286B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977840A (en) * 2019-03-20 2019-07-05 四川川大智胜软件股份有限公司 A kind of airport scene monitoring method based on deep learning
CN110490155B (en) * 2019-08-23 2022-05-17 电子科技大学 Method for detecting unmanned aerial vehicle in no-fly airspace
CN110850897B (en) * 2019-11-13 2023-06-13 中国人民解放军空军工程大学 Deep neural network-oriented small unmanned aerial vehicle pose data acquisition method
CN111611918B (en) * 2020-05-20 2023-07-21 重庆大学 Traffic flow data set acquisition and construction method based on aerial data and deep learning
CN111723690B (en) * 2020-06-03 2023-10-20 北京全路通信信号研究设计院集团有限公司 Method and system for monitoring state of circuit equipment
CN111797940A (en) * 2020-07-20 2020-10-20 中国科学院长春光学精密机械与物理研究所 Image identification method based on ocean search and rescue and related device
CN112329768A (en) * 2020-10-23 2021-02-05 上善智城(苏州)信息科技有限公司 Improved YOLO-based method for identifying fuel-discharging stop sign of gas station
CN112668445A (en) * 2020-12-24 2021-04-16 南京泓图人工智能技术研究院有限公司 Vegetable type detection and identification method based on yolov5
CN112699810B (en) * 2020-12-31 2024-04-09 中国电子科技集团公司信息科学研究院 Method and device for improving character recognition precision of indoor monitoring system
CN113822372A (en) * 2021-10-20 2021-12-21 中国民航大学 Unmanned aerial vehicle detection method based on YOLOv5 neural network
CN113822375B (en) * 2021-11-08 2024-04-26 北京工业大学 Improved traffic image target detection method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017077348A1 (en) * 2015-11-06 2017-05-11 Squarehead Technology As Uav detection
CN107862705A (en) * 2017-11-21 2018-03-30 重庆邮电大学 A kind of unmanned plane small target detecting method based on motion feature and deep learning feature

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017077348A1 (en) * 2015-11-06 2017-05-11 Squarehead Technology As Uav detection
CN107862705A (en) * 2017-11-21 2018-03-30 重庆邮电大学 A kind of unmanned plane small target detecting method based on motion feature and deep learning feature

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on UAV target recognition algorithm based on transfer learning SAE; Xie, B. et al.; Infrared and Laser Engineering; 2018-06-30; Vol. 47, No. 6 *
Research on UAV recognition algorithms based on deep learning; Jiang Zhaojun et al.; Application of Electronic Technique; 2017-07-31 *
Research on low-altitude dim small UAV target detection based on deep neural networks; Wang Jingyu et al.; Journal of Northwestern Polytechnical University; 2018-04-30 *

Also Published As

Publication number Publication date
CN109255286A (en) 2019-01-22

Similar Documents

Publication Publication Date Title
CN109255286B (en) Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework
Dong et al. UAV-based real-time survivor detection system in post-disaster search and rescue operations
Jiao et al. A deep learning based forest fire detection approach using UAV and YOLOv3
Hu et al. Object detection of UAV for anti-UAV based on improved YOLO v3
Wang et al. A deep-learning-based sea search and rescue algorithm by UAV remote sensing
WO2021043112A1 (en) Image classification method and apparatus
CN110189304B (en) Optical remote sensing image target on-line rapid detection method based on artificial intelligence
Gallego et al. Detection of bodies in maritime rescue operations using unmanned aerial vehicles with multispectral cameras
CN112068111A (en) Unmanned aerial vehicle target detection method based on multi-sensor information fusion
CN109816695A (en) Target detection and tracking method for infrared small unmanned aerial vehicle under complex background
Dong et al. Real-time survivor detection in UAV thermal imagery based on deep learning
Kiran et al. Weapon Detection using Artificial Intelligence and Deep Learning for Security Applications
Slyusar et al. Improving the model of object detection on aerial photographs and video in unmanned aerial systems
Song et al. PDD: Post-Disaster Dataset for Human Detection and Performance Evaluation
Niu et al. UAV detection based on improved YOLOv4 object detection model
CN115331127A (en) Unmanned aerial vehicle moving target detection method based on attention mechanism
Wu et al. Research on asphalt pavement disease detection based on improved YOLOv5s
Tiwari et al. Detection of Camouflaged Drones using Computer Vision and Deep Learning Techniques
Arif et al. Automatic Detection of Weapons in Surveillance Cameras Using Efficient-Net.
Cheng et al. Anti-UAV Detection Method Based on Local-Global Feature Focusing Module
CN112818837A (en) Aerial photography vehicle weight recognition method based on attitude correction and difficult sample perception
Azad et al. Air-to-Air Simulated Drone Dataset for AI-powered problems
Chen et al. Design of Simulation Experimental Platform for Airport Bird Control Linkage System Based on Improved YOLOv5
Bie et al. UAV recognition and tracking method based on YOLOv5
Mylvaganam et al. Deep learning for arbitrary-shaped water pooling region detection on aerial images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Zhi Xiyang

Inventor after: Yu Lijian

Inventor after: Hu Jianming

Inventor after: Gong Jinnan

Inventor after: Jiang Shikai

Inventor after: Chen Wenbin

Inventor before: Zhi Xiyang

Inventor before: Yu Lijian

Inventor before: Gong Jinnan

Inventor before: Jiang Shikai

Inventor before: Chen Wenbin

Inventor before: Hu Jianming

GR01 Patent grant
GR01 Patent grant