
CN113822372A - Unmanned aerial vehicle detection method based on YOLOv5 neural network - Google Patents


Info

Publication number
CN113822372A
CN113822372A
Authority
CN
China
Prior art keywords
neural network
UAV
feature map
detection method
aerial vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111220550.0A
Other languages
Chinese (zh)
Inventor
屈景怡
毕新杰
刘闪亮
李云龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Civil Aviation University of China
Original Assignee
Civil Aviation University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Civil Aviation University of China filed Critical Civil Aviation University of China
Priority to CN202111220550.0A priority Critical patent/CN113822372A/en
Publication of CN113822372A publication Critical patent/CN113822372A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G06F 18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a UAV detection method based on the YOLOv5 neural network, comprising the following steps: 1) acquiring UAV-related pictures and performing image preprocessing to obtain a UAV data set; 2) inputting the UAV data set into a BottleneckCSP backbone network with a Focus layer to obtain multiple feature maps of different sizes; 3) pooling the feature maps of different sizes, then fusing their feature information using the FPN and PAN structures; 4) inputting the fused feature maps into a prediction network, which outputs the target type and position information. Through multi-scale feature fusion, the UAV detection method based on the YOLOv5 neural network of the present invention improves the accuracy of small-target detection; it not only effectively increases UAV detection speed but also greatly improves detection accuracy.

Description

Unmanned aerial vehicle detection method based on YOLOv5 neural network
Technical Field
The invention belongs to the technical field of unmanned aerial vehicle detection, and particularly relates to an unmanned aerial vehicle detection method and system based on a YOLOv5 neural network.
Background
The existing unmanned aerial vehicle (UAV) detection technologies mainly include radio detection, acoustic detection, radar detection, and photoelectric detection, each with its own advantages and disadvantages:
Radio detection equipment identifies and locates a UAV by detecting the radio signals that transmit its data. It is a passive detection means with a long detection range for remote-controlled UAVs. However, radio detection equipment is easily interfered with by other external radio signals, and because of the complex electromagnetic environment of a civil aviation airport, it cannot identify a non-cooperative UAV or a remotely controlled UAV that communicates over 5G signals.
Acoustic detection exploits the fact that UAVs of different models sound different: acoustic detection equipment can identify the UAV type by using a multi-directional microphone array to capture the unique high-frequency motor sound a UAV emits while operating, then processing and filtering the sound waves and analyzing them for characteristic frequencies to confirm the presence of a nearby UAV. Acoustic detection is susceptible to interference from ambient noise and is therefore only suitable for quieter environments; it tends to fail in urban areas or other noisy settings, and its effective detection range is very short, for example only 200 m for the product of DroneShield (United States) and only 150 m for the product of Alsok (Japan).
Target detection based on radar equipment is only slightly affected by weather factors such as cloud, fog, rain, and snow; it has strong penetrating power, operates in all weather at all times, and is suitable for long-range detection, reaching 5-7 km for small targets. However, in the near range of an airport it may interfere with the airport's existing electronics; in the complex airport environment, the echo of a large target can mix with the echo of a small target such as a UAV or a bird, degrading small-target detection performance; moreover, after radar equipment detects a UAV or a bird, an accurate picture of the target cannot be obtained directly, so a supervisor cannot easily tell from the acquired data whether the target is a UAV; and when several closely spaced targets appear at once, radar cannot correctly judge their number, which undoubtedly leaves a safety hazard for the airport.
The detection range of photoelectric technology can reach 2 km, and its greatest advantage is that it obtains a visible-light image of the target. It is strongly affected by adverse conditions such as rain, fog, and snow, where degraded visible-light image quality affects UAV detection and identification to some extent; in good weather, however, it can obtain high-quality visible-light images.
The advantages of radar and photoelectric equipment can be combined: the radar detects the target's position and sends the position information to the photoelectric equipment, guiding it to turn toward the target, after which the photoelectric equipment acquires a visible-light image. In this way, UAV image information can be obtained quickly and accurately. Based on the acquired image information, the present application detects and identifies the UAV in the image, improving the precision and response speed of UAV detection.
Disclosure of Invention
In view of this, the present invention is directed to a method and a system for detecting an unmanned aerial vehicle based on the YOLOv5 neural network, used to quickly identify the UAV in an image from acquired image information.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
in a first aspect, the invention provides a method for detecting an unmanned aerial vehicle based on a YOLOv5 neural network, which comprises the following steps:
1) acquiring relevant pictures of the unmanned aerial vehicle, and performing image preprocessing to obtain an unmanned aerial vehicle data set;
2) inputting the unmanned aerial vehicle data set into a BottleneckCSP backbone network with a Focus layer to obtain a plurality of feature maps with different sizes;
3) pooling the plurality of feature maps of different sizes, and fusing the feature information of the differently sized feature maps using the FPN and PAN structures;
4) inputting the feature maps obtained by the fusion processing into a prediction network, and outputting the target type and position information.
Further, in step 1, the image preprocessing method is as follows:
The number of acquired UAV-related pictures is increased by a data amplification method, and the pictures are then labeled to obtain the UAV data set.
Further, the data amplification method is as follows:
The number of picture samples is expanded by image splicing, rotation, and noise addition.
Further, in step 2, a processing method of the data set of the unmanned aerial vehicle in the Focus layer is as follows:
The input is copied four times and then cut by a slicing operation into four 3 × 320 × 320 slices; the four slices are connected in depth by tensor splicing, after which a convolutional layer with 32 convolution kernels generates a 32 × 320 × 320 output, which is input to the next convolutional layer.
Further, in step 2, the BottleneckCSP backbone network includes several 1 × 1 and 3 × 3 convolutional layers, each of which is followed by a BN layer and a Mish layer.
Further, in step 3, pooling is applied to the feature maps of different sizes using the SPP structure, specifically: pooling and stacking are performed with pooling kernels of sizes 5, 9, and 13.
Further, in step 2, the number of the obtained feature maps with different sizes is 3;
in step 3, a specific method for performing fusion processing on the feature information of the feature maps with different sizes by using the FPN and PAN structures is as follows:
In the FPN structure, the smallest-scale 20 × 20 feature map is upsampled once and tensor-spliced with the 40 × 40 feature map output by the preceding network; the result is upsampled again and tensor-spliced with the 80 × 80 feature map output by the preceding network, outputting the largest-scale 80 × 80 prediction feature map. Then, in the PAN structure, the 80 × 80 feature map is downsampled once and tensor-spliced with the 40 × 40 feature map of the preceding network, outputting the medium-scale 40 × 40 prediction feature map; finally, the 40 × 40 feature map is downsampled again and tensor-spliced with the 20 × 20 feature map of the preceding network, outputting the smallest-scale 20 × 20 feature map.
Further, in step 4, the regression loss function of the prediction network is:
$$L_{\mathrm{GIoU}} = 1 - \mathrm{GIoU}, \qquad \mathrm{GIoU} = \mathrm{IoU}(A,B) - \frac{|C \setminus (A \cup B)|}{|C|}$$
wherein A, B is two arbitrary rectangle frames, and C is the minimum bounding rectangle of A and B.
In a second aspect, the present invention provides an electronic device, including a processor, and a memory communicatively coupled to the processor and configured to store instructions executable by the processor, wherein: the processor, when executing the instructions, implements the steps of the method for detecting a drone based on the YOLOv5 neural network according to the first aspect.
In a third aspect, the present invention provides a server comprising at least one processor, and a memory communicatively coupled to the processor, the memory storing instructions executable by the at least one processor, wherein: the instructions are executable by the processor to cause the at least one processor to perform the steps of the YOLOv5 neural network-based drone detection method according to the first aspect above.
Compared with the prior art, the unmanned aerial vehicle detection method and system based on the YOLOv5 neural network have the following beneficial effects:
(1) the unmanned aerial vehicle detection method based on the YOLOv5 neural network can improve the accuracy of small target detection through multi-scale feature fusion, not only effectively improves the detection speed of the unmanned aerial vehicle, but also greatly improves the accuracy of detection.
(2) The unmanned aerial vehicle detection method based on the YOLOv5 neural network is simple to train and easy to operate, avoids complex and tedious operations, and has high usability.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a network structure diagram of an unmanned aerial vehicle detection method based on the YOLOv5 neural network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a CSP structure according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an SPP module according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of FPN and PAN structures according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a process of calculating a loss function GIOU according to an embodiment of the present invention;
fig. 6 is a detection result diagram of the method for detecting an unmanned aerial vehicle based on the YOLOv5 neural network according to the embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
The first embodiment is as follows:
the embodiment provides an unmanned aerial vehicle detection method based on a YOLOv5 neural network, as shown in fig. 1, including the following steps:
1) acquiring relevant pictures of the unmanned aerial vehicle, and performing image preprocessing to obtain an unmanned aerial vehicle data set;
2) inputting an unmanned aerial vehicle data set into a BottleneckCSP backbone network with a Focus layer to obtain a plurality of characteristic diagrams with different sizes, wherein the Focus layer is a neural network structure;
3) pooling the plurality of feature maps of different sizes, and fusing the feature information of the differently sized feature maps using the FPN and PAN structures;
4) inputting the feature maps obtained by the fusion processing into a prediction network, and outputting the target type and position information.
In step 1, the image preprocessing method is as follows:
The number of acquired UAV-related pictures is increased by a data amplification method, and the pictures are then labeled to obtain the UAV data set.
The data amplification method is as follows:
The number of picture samples is expanded by image splicing, rotation, and noise addition.
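The three amplification operations named above can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation; the function name `amplify` and the noise parameters are assumptions, and a fixed 180-degree rotation stands in for arbitrary rotation.

```python
import numpy as np

def amplify(img, rng):
    """Sketch of the three amplification operations: rotation, noise, splicing.
    (Function name and parameters are illustrative, not from the patent.)"""
    rotated = np.rot90(img, k=2, axes=(0, 1))                    # 180-degree rotation
    noisy = np.clip(img + rng.normal(0, 10, img.shape), 0, 255)  # additive Gaussian noise
    stitched = np.concatenate([img, rotated], axis=1)            # side-by-side image splicing
    return rotated, noisy, stitched

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (320, 320, 3)).astype(float)
rotated, noisy, stitched = amplify(img, rng)
print(rotated.shape, noisy.shape, stitched.shape)
```

Note that in a real pipeline the bounding-box labels must be transformed together with the images (rotation and splicing both move the target's coordinates).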
In step 2, the processing method of the unmanned aerial vehicle data set in the Focus layer is as follows:
The input is copied four times and then cut by a slicing operation into four 3 × 320 × 320 slices; the four slices are connected in depth by tensor splicing, after which a convolutional layer with 32 convolution kernels generates a 32 × 320 × 320 output, which is input to the next convolutional layer.
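The slicing and depth-wise splicing step can be sketched in NumPy as follows (the function name is illustrative; the 32-kernel convolution that follows it is omitted, so the sketch stops at the 12-channel spliced tensor):

```python
import numpy as np

def focus_slice(x):
    """Interleaved 2x2 slicing of a (C, H, W) image into four slices,
    spliced in depth: (C, H, W) -> (4C, H/2, W/2)."""
    return np.concatenate([
        x[:, ::2, ::2], x[:, 1::2, ::2],    # (even, even) and (odd, even) pixels
        x[:, ::2, 1::2], x[:, 1::2, 1::2]], # (even, odd) and (odd, odd) pixels
        axis=0)

x = np.arange(3 * 640 * 640, dtype=float).reshape(3, 640, 640)
y = focus_slice(x)   # four 3x320x320 slices stacked in depth
print(y.shape)       # (12, 320, 320); a 32-kernel conv would then map 12 -> 32 channels
```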
In step 2, the structure of the BottleneckCSP backbone network is as shown in fig. 2, a schematic diagram of the CSP structure. The CSP structure can enhance the learning ability of the neural network, maintaining accuracy while reducing weight, lowering the computational bottleneck, and reducing memory cost. BottleneckCSP is based on Darknet53, with a CSP structure added to each large residual block; the BottleneckCSP network is composed of a series of 1 × 1 and 3 × 3 convolutional layers, each followed by a BN layer and a Mish layer.
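The Mish activation that follows each Conv+BN pair has the standard closed form x * tanh(softplus(x)), which the patent names but does not define; a minimal NumPy sketch:

```python
import numpy as np

def mish(x):
    """Mish activation: x * tanh(softplus(x)).
    Naive softplus via log1p(exp(x)); overflows for very large x."""
    return x * np.tanh(np.log1p(np.exp(x)))

print(mish(np.array([-1.0, 0.0, 2.0])))
```

Unlike ReLU, Mish is smooth and lets small negative values pass through, which is often credited with improving gradient flow in deep backbones.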
In step 3, the feature maps of different sizes are pooled using the SPP structure: pooling and stacking are performed with pooling kernels of sizes 5, 9, and 13, as follows:
As shown in fig. 3, the SPP structure solves the problem of non-uniform input image sizes, and the fusion of features of different sizes within SPP helps when target sizes in the image to be detected differ greatly, especially for complex multi-target images. In the figure, the whole region is first pooled directly, each channel yielding one point, for 256 points in total forming a 1 × 256 vector; next, the region is divided into a 2 × 2 grid of 4 cells, and each cell is pooled to give four 1 × 256 vectors; finally, the region is divided into a 4 × 4 grid of 16 cells, each pooled to give sixteen 1 × 256 vectors, and the results of the three divisions are spliced together. As can be seen from the figure, the whole process is completely independent of the input size, so candidate boxes of any size can be processed. After a feature map in the embodiment of the present invention is input to the SPP structure, pooling and stacking are performed with pooling kernels of sizes 5, 9, and 13.
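The kernel-5/9/13 pooling-and-stacking step can be sketched in NumPy like this. One assumption is flagged: following the common YOLOv5 convention, the unpooled input is stacked alongside the three pooled maps, which the patent text ("pooling and stacking") does not spell out.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def maxpool_same(x, k):
    """Stride-1 max pooling with 'same' padding on a (C, H, W) array."""
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), constant_values=-np.inf)
    w = sliding_window_view(xp, (k, k), axis=(1, 2))  # (C, H, W, k, k) windows
    return w.max(axis=(3, 4))

def spp(x):
    """SPP sketch: pool with kernels 5, 9, 13 and stack with the input."""
    return np.concatenate([x] + [maxpool_same(x, k) for k in (5, 9, 13)], axis=0)

y = spp(np.random.rand(256, 20, 20))
print(y.shape)  # (1024, 20, 20): 256 input channels + 3 x 256 pooled channels
```

Because every pooled map keeps the input's spatial size, the outputs can be stacked channel-wise regardless of the feature map's resolution.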
In step 2, the number of feature maps of different sizes obtained is 3;
in step 3, as shown in fig. 4, fig. 4 is a schematic diagram of FPN and PAN structures, feature fusion is performed on feature maps of different scales through the FPN and PAN structures, and three feature maps of different scales are output, where the FPN and PAN structures are both neural networks.
The specific method for fusing the feature information of the feature maps with different sizes by using the FPN and PAN structures is as follows:
In the FPN structure, the smallest-scale 20 × 20 feature map is upsampled once and tensor-spliced with the 40 × 40 feature map output by the preceding network; the result is upsampled again and tensor-spliced with the 80 × 80 feature map output by the preceding network, outputting the largest-scale 80 × 80 prediction feature map. Then, in the PAN structure, the 80 × 80 feature map is downsampled once and tensor-spliced with the 40 × 40 feature map of the preceding network, outputting the medium-scale 40 × 40 prediction feature map; finally, the 40 × 40 feature map is downsampled again and tensor-spliced with the 20 × 20 feature map of the preceding network, outputting the smallest-scale 20 × 20 feature map.
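The resolution flow and splicing order of the FPN/PAN fusion can be sketched as follows. This is a shape-level NumPy sketch only: the real network interposes convolutional and CSP blocks that adjust channel counts between each step (omitted here, so the channel dimensions grow unrealistically), and nearest-neighbour repetition and stride-2 subsampling stand in for the learned up/downsampling layers.

```python
import numpy as np

def upsample2(x):    # nearest-neighbour 2x upsampling on (C, H, W)
    return x.repeat(2, axis=1).repeat(2, axis=2)

def downsample2(x):  # stride-2 subsampling, standing in for a stride-2 convolution
    return x[:, ::2, ::2]

def splice(a, b):    # "tensor splicing": concatenation along the channel axis
    return np.concatenate([a, b], axis=0)

c20, c40, c80 = (np.random.rand(8, s, s) for s in (20, 40, 80))  # backbone outputs
# FPN (top-down): 20x20 -> up -> splice with 40x40 -> up -> splice with 80x80
f40 = splice(upsample2(c20), c40)
f80 = splice(upsample2(f40), c80)    # largest-scale (80x80) prediction map
# PAN (bottom-up): 80x80 -> down -> splice -> then down -> splice again
n40 = splice(downsample2(f80), f40)  # medium-scale (40x40) prediction map
n20 = splice(downsample2(n40), c20)  # smallest-scale (20x20) prediction map
print(f80.shape[1:], n40.shape[1:], n20.shape[1:])
```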
In step 4, as shown in fig. 5, the regression loss function of the prediction network is:
$$L_{\mathrm{GIoU}} = 1 - \mathrm{GIoU}, \qquad \mathrm{GIoU} = \mathrm{IoU}(A,B) - \frac{|C \setminus (A \cup B)|}{|C|}$$
wherein A, B is two arbitrary rectangle frames, and C is the minimum bounding rectangle of A and B.
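A direct sketch of the GIoU regression loss for axis-aligned boxes in (x1, y1, x2, y2) form, where C is the minimum bounding rectangle of A and B as stated above (the box format and function name are illustrative assumptions):

```python
def giou_loss(a, b):
    """GIoU loss for two axis-aligned boxes a, b given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # intersection
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # C: minimum bounding rectangle of A and B
    area_c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    giou = iou - (area_c - union) / area_c
    return 1.0 - giou

print(giou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # 0.0 for identical boxes
```

Unlike a plain 1 - IoU loss, this still produces a gradient signal when the boxes do not overlap at all, because the enclosing-rectangle term grows as the boxes move apart.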
Fig. 6 is a detection result diagram of the method for detecting an unmanned aerial vehicle based on the YOLOv5 neural network according to the embodiment of the present invention;
table 1 compares the performance of the inventive examples with the YOLOv4 neural network.
TABLE 1
(Table 1 appears only as an image in the original document; it reports the Precision and Recall of the embodiment and of YOLOv4.)
Precision in Table 1 is the precision, i.e., the proportion of the data predicted as the positive class that is predicted correctly; Recall is the recall rate, i.e., the proportion of all positive-class data that is predicted correctly. The calculation formulas are as follows:
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$

$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
where TP (true positive) is the number of positive-class samples correctly predicted as positive; FP (false positive) is the number of negative-class samples falsely predicted as positive; and FN (false negative) is the number of positive-class samples falsely predicted as negative.
AP (Average Precision) is the average precision, calculated as the area enclosed by the Precision-Recall curve.
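The two formulas above can be checked with a short sketch (binary labels, 1 = UAV, 0 = background; names are illustrative):

```python
def precision_recall(y_true, y_pred):
    """Precision and recall from binary labels (1 = UAV, 0 = background)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fp), tp / (tp + fn)

prec, rec = precision_recall([1, 1, 0, 1, 0], [1, 0, 1, 1, 0])
print(prec, rec)  # 2 TP, 1 FP, 1 FN -> precision 2/3, recall 2/3
```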
In a second aspect, the present invention provides an electronic device, including a processor, and a memory communicatively coupled to the processor and configured to store instructions executable by the processor, wherein: when the processor executes the instructions, the steps of the method for detecting an unmanned aerial vehicle based on the YOLOv5 neural network according to the above embodiment are implemented, and as for a hardware structure of an electronic device, the method can be implemented by using the prior art, and details are not repeated here.
In a third aspect, the present invention provides a server comprising at least one processor, and a memory communicatively coupled to the processor, the memory storing instructions executable by the at least one processor, wherein: the instructions are executed by the processor, so that the at least one processor performs the steps of the method for detecting a drone based on the YOLOv5 neural network according to the foregoing embodiment, and as for the hardware structure of the server, the hardware structure can be implemented by using the prior art, and details are not described here.
Those of ordinary skill in the art will appreciate that the various illustrative systems and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed method and system may be implemented in other ways. For example, the above described division of elements is merely a logical division, and other divisions may be realized, for example, multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not executed. The units may or may not be physically separate, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as being included in the following claims and description.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A UAV detection method based on the YOLOv5 neural network, characterized by comprising the following steps:
1) acquiring UAV-related pictures and performing image preprocessing to obtain a UAV data set;
2) inputting the UAV data set into a BottleneckCSP backbone network with a Focus layer to obtain multiple feature maps of different sizes;
3) pooling the feature maps of different sizes, then fusing their feature information using the FPN and PAN structures;
4) inputting the fused feature maps into a prediction network, and outputting the target type and position information.

2. The YOLOv5 neural network-based UAV detection method according to claim 1, characterized in that in step 1 the image preprocessing is as follows: the number of acquired UAV-related pictures is increased by a data amplification method, and the pictures are then labeled to obtain the UAV data set.

3. The YOLOv5 neural network-based UAV detection method according to claim 2, characterized in that the data amplification is as follows: the number of picture samples is expanded by image splicing, rotation, and noise addition.

4. The YOLOv5 neural network-based UAV detection method according to claim 1, characterized in that in step 2 the UAV data set is processed in the Focus layer as follows: the input is copied four times and cut by a slicing operation into four 3 × 320 × 320 slices; the four slices are connected in depth by tensor splicing, and a convolutional layer with 32 convolution kernels then generates a 32 × 320 × 320 output, which is input to the next convolutional layer.

5. The YOLOv5 neural network-based UAV detection method according to claim 1, characterized in that in step 2 the BottleneckCSP backbone network comprises several 1 × 1 and 3 × 3 convolutional layers, each followed by a BN layer and a Mish layer.

6. The YOLOv5 neural network-based UAV detection method according to claim 1, characterized in that in step 3 the feature maps of different sizes are pooled using the SPP structure, specifically: pooling and stacking are performed with pooling kernels of sizes 5, 9, and 13.

7. The YOLOv5 neural network-based UAV detection method according to claim 1, characterized in that in step 2 the number of feature maps of different sizes obtained is 3; and in step 3 the feature information of the differently sized feature maps is fused using the FPN and PAN structures as follows: in the FPN structure, the smallest-scale 20 × 20 feature map is upsampled once and tensor-spliced with the 40 × 40 feature map output by the preceding network, then upsampled again and tensor-spliced with the 80 × 80 feature map output by the preceding network, outputting the largest-scale 80 × 80 prediction feature map; then, in the PAN structure, the 80 × 80 feature map is downsampled once and tensor-spliced with the 40 × 40 feature map of the preceding network, outputting the medium-scale 40 × 40 prediction feature map; finally, the 40 × 40 feature map is downsampled again and tensor-spliced with the 20 × 20 feature map of the preceding network, outputting the smallest-scale 20 × 20 feature map.

8. The YOLOv5 neural network-based UAV detection method according to claim 1, characterized in that in step 4 the regression loss function of the prediction network is:

$$L_{\mathrm{GIoU}} = 1 - \mathrm{GIoU}, \qquad \mathrm{GIoU} = \mathrm{IoU}(A,B) - \frac{|C \setminus (A \cup B)|}{|C|}$$

where A and B are two arbitrary rectangular boxes, and C is the minimum bounding rectangle of A and B.

9. An electronic device comprising a processor, and a memory communicatively connected to the processor and storing instructions executable by the processor, characterized in that when the processor executes the instructions it implements the steps of the YOLOv5 neural network-based UAV detection method according to any one of claims 1-8.

10. A server comprising at least one processor, and a memory communicatively connected to the processor, the memory storing instructions executable by the at least one processor, characterized in that the instructions are executed by the processor to cause the at least one processor to perform the steps of the YOLOv5 neural network-based UAV detection method according to any one of claims 1-8.
CN202111220550.0A 2021-10-20 2021-10-20 Unmanned aerial vehicle detection method based on YOLOv5 neural network Pending CN113822372A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111220550.0A CN113822372A (en) 2021-10-20 2021-10-20 Unmanned aerial vehicle detection method based on YOLOv5 neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111220550.0A CN113822372A (en) 2021-10-20 2021-10-20 Unmanned aerial vehicle detection method based on YOLOv5 neural network

Publications (1)

Publication Number Publication Date
CN113822372A true CN113822372A (en) 2021-12-21

Family

ID=78920559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111220550.0A Pending CN113822372A (en) 2021-10-20 2021-10-20 Unmanned aerial vehicle detection method based on YOLOv5 neural network

Country Status (1)

Country Link
CN (1) CN113822372A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114596431A (en) * 2022-03-10 2022-06-07 北京百度网讯科技有限公司 Information determination method and device and electronic equipment
CN115049897A (en) * 2022-06-17 2022-09-13 陕西智引科技有限公司 Underground robot detection system based on improved YoloV5 neural network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255286A (en) * 2018-07-21 2019-01-22 哈尔滨工业大学 A kind of quick detection recognition method of unmanned plane optics based on YOLO deep learning network frame
CN111401410A (en) * 2020-02-27 2020-07-10 江苏大学 A Traffic Sign Detection Method Based on Improved Cascaded Neural Network
CN112200161A (en) * 2020-12-03 2021-01-08 北京电信易通信息技术股份有限公司 A Face Recognition Detection Method Based on Hybrid Attention Mechanism
CN113139594A (en) * 2021-04-19 2021-07-20 北京理工大学 Airborne image unmanned aerial vehicle target self-adaptive detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG Xiaoling et al.: "Traffic Sign Recognition and Detection Based on YOLOv5", Information Technology and Informatization, no. 4, pages 28-30 *
YAN Yousan: "Deep Learning for Face Image Processing: Core Algorithms and Practical Cases", 31 July 2020, China Machine Press, pages 106-107 *


Similar Documents

Publication Publication Date Title
CN111326023B (en) Unmanned aerial vehicle route early warning method, device, equipment and storage medium
US10514711B2 (en) Flight control using computer vision
KR102661954B1 (en) A method of processing an image, and apparatuses performing the same
KR102526542B1 (en) 2d vehicle localizing using geoarcs
CN110663060B (en) Method, device, system and vehicle/robot for representing environmental elements
WO2022179207A1 (en) Window occlusion detection method and apparatus
EP3291178B1 (en) 3d vehicle localizing using geoarcs
CN113822372A (en) Unmanned aerial vehicle detection method based on YOLOv5 neural network
CN110390706A (en) A kind of method and apparatus of object detection
CN112650300A (en) Unmanned aerial vehicle obstacle avoidance method and device
CN115082690B (en) Target recognition method, target recognition model training method and device
CN116597402A (en) Scene perception method and related equipment thereof
US20210270959A1 (en) Target recognition from sar data using range profiles and a long short-term memory (lstm) network
CN115019060A (en) Target recognition method, and training method and device of target recognition model
CN115131756B (en) Target detection method and device
CN116912483A (en) Target detection method, electronic device and storage medium
WO2021218347A1 (en) Clustering method and apparatus
Sulaj et al. Examples of real-time UAV data processing with cloud computing
CN115482277A (en) Social distance risk early warning method and device
CN115755941A (en) Service processing method, device, equipment and storage medium
CN116681884B (en) Object detection method and related device
CN117315402B (en) Training method of three-dimensional object detection model and three-dimensional object detection method
JP7598307B2 (en) MODEL CREATION DEVICE, PROPAGATION CHARACTERISTICS SPECIFICATION DEVICE, MODEL CREATION METHOD, AND PROGRAM
CN118965257A (en) Aircraft identification method, device, electronic device and computer readable medium
CN116778447A (en) Training method of target detection model, target detection method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20211221)