CN117351353A - Crop pest real-time detection method and device based on deep learning and computer storage medium - Google Patents

Info

Publication number
CN117351353A
Authority
CN
China
Prior art keywords
module
shuffle
crop
pest
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311333787.9A
Other languages
Chinese (zh)
Other versions
CN117351353B (en)
Inventor
施坤昊
孙国栋
柯承康
仇志文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changshu Institute of Technology
Original Assignee
Changshu Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changshu Institute of Technology filed Critical Changshu Institute of Technology
Priority to CN202311333787.9A priority Critical patent/CN117351353B/en
Priority claimed from CN202311333787.9A external-priority patent/CN117351353B/en
Publication of CN117351353A publication Critical patent/CN117351353A/en
Application granted granted Critical
Publication of CN117351353B publication Critical patent/CN117351353B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A 40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Catching Or Destruction (AREA)

Abstract

The invention discloses a deep-learning-based method for real-time detection of crop diseases and insect pests, comprising the following steps: collect images of different pest and disease types and normal images of different crops, preprocess the collected pest and disease images, and construct a crop pest and disease database; construct and train a target detection network model, and use the trained model to detect whether crops are infected by diseases and insect pests. The target detection network model is an improvement of YOLOv5s: a CBSM module, shuffle_block modules from ShuffleNetv2 and an SPPF module replace the feature extraction backbone network; two skip connections are led out from shuffle_block modules in the backbone to the two Concat modules in the Neck network; and a GAM attention module is added between the Neck network and the Head module. The invention also discloses a deep-learning-based device for real-time crop pest detection and a computer storage medium storing a computer program that implements the method. The invention improves detection speed, reduces missed detections, enhances model robustness, and adapts to more complex environments.

Description

Crop pest real-time detection method and device based on deep learning and computer storage medium
Technical Field
The invention relates to the technical field of image recognition, in particular to a crop disease and pest real-time detection method and device based on deep learning and a computer storage medium.
Background
In the prior art, deep learning models typically require significant computational resources and time for training and inference, which creates real-time performance problems. In research on plant diseases and insect pests, timely and accurate detection and identification are important for protecting crops, but some existing deep learning models may not meet real-time requirements.
Deep learning models also have problems with accuracy. Since the symptoms and characteristics of diseases and pests may be affected by environmental factors, a model may produce false or missed judgments, and it may confuse different diseases and pests with one another. Owing to the diversity and complexity of diseases and pests, accuracy remains a challenge; for emerging or rare diseases in particular, model accuracy may be low.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a deep-learning-based method for real-time detection of crop diseases and insect pests. It addresses the difficulty of accurately detecting pest targets that are too small or too large, caused by the varied sizes and shapes of pests and by the varying distance between the camera and the object, and improves detection robustness. The invention also provides a deep-learning-based device for real-time crop pest detection and a computer storage medium.
The technical scheme of the invention is as follows: a crop pest real-time detection method based on deep learning comprises the following steps:
step 1, collecting images of different pest and disease types and normal images of different crops, preprocessing the collected pest and disease images, and constructing a crop pest and disease database;
step 2, constructing a target detection network model;
step 3, training the target detection network model by utilizing the crop disease and insect pest database obtained in the step 1 to obtain a trained network model;
step 4, acquiring an image of the crops to be detected, and using the trained network model to detect whether the crops are infected with diseases and insect pests;
wherein the target detection network model is an improvement of YOLOv5s; the improvement comprises replacing the feature extraction backbone network of YOLOv5s with a CBSM module, shuffle_block modules from ShuffleNetv2 and the SPPF module of YOLOv5s, leading two skip connections out from the shuffle_block modules of the backbone network to the two Concat modules in the PAN of the Neck network of YOLOv5s, and adding a GAM attention module between the Neck network of YOLOv5s and the Head module.
Further, the shuffle_block modules comprise a first shuffle_block module and a second shuffle_block module. The replacement feature extraction backbone network of YOLOv5s consists of a CBSM module, a first shuffle_block module, a second shuffle_block module, a first shuffle_block module and an SPPF module connected in sequence, and the skip connections are led out from two second shuffle_block modules. The first shuffle_block module is composed of a unit2 module followed by three unit1 modules (unit1×3), and the second shuffle_block module of a unit2 module followed by five unit1 modules (unit1×5), where unit1 is the ShuffleNetv2 block with stride 1 and unit2 is the ShuffleNetv2 block with stride 2.
Further, a convolution module with a kernel size of 1×1, a stride of 1 and a padding of 0 is added after each of the two Concat modules that receive the skip connections.
Further, when the target detection network model is trained in step 3, the data in the crop pest and disease database is sequentially subjected to Mosaic data enhancement, adaptive anchor box calculation and adaptive image scaling before being input into the target detection network model, and during training the Focal Loss function and a DIoU-NMS (distance intersection-over-union) non-maximum suppression algorithm replace the NMS non-maximum suppression algorithm of YOLOv5s.
Further, the Focal Loss function is defined as

$FL(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$

where $(1 - p_t)^{\gamma}$ is the modulating factor, γ ≥ 0 is a tunable focusing parameter, and p_t is the predicted probability, with a value between 0 and 1. p_t reflects the closeness to class y: the larger p_t, the closer the prediction is to class y, i.e. the more accurate the classification. It also reflects the classification difficulty: the larger p_t, the higher the classification confidence and the easier the sample. For accurately classified samples, p_t approaches 1 and $(1 - p_t)^{\gamma}$ approaches 0, reducing the weight of accurately classified samples in the loss function; the weight factor α is introduced to reduce the weight of the positive or negative samples and so address sample imbalance.
Further, in the DIoU-NMS (distance intersection-over-union) non-maximum suppression algorithm that replaces the NMS of YOLOv5s, a DIoU loss function replaces the IoU loss function. The DIoU loss function is defined as

$L_{DIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2}$

where b and b^gt are the center points of the prediction box B and the target box B^gt respectively, ρ²(b, b^gt) is the square of the Euclidean distance between them, and c is the diagonal length of the smallest enclosing box covering both B and B^gt.
Further, the preprocessing in step 1 includes rotation, scaling, cropping, occlusion, color change, Gaussian noise addition and data labeling.
The invention also provides a crop pest real-time detection device based on deep learning, which comprises:
and a data acquisition module: collecting different pest and disease types and normal images of different crops, preprocessing the collected pest and disease images, and constructing a crop pest and disease database;
and (3) detecting a network module: constructing a target detection network model;
and the network training module: training the target detection network model by utilizing the crop disease and pest database obtained by the data acquisition module to obtain a trained network model;
and, a detection module: collecting crop images to be detected, and detecting whether the crop is infected by the plant diseases and insect pests or not by the trained network model;
wherein the target detection network model is an improvement of YOLOv5s; the improvement comprises replacing the feature extraction backbone network of YOLOv5s with a CBSM module, shuffle_block modules from ShuffleNetv2 and the SPPF module of YOLOv5s, leading two skip connections out from the shuffle_block modules of the backbone network to the two Concat modules in the PAN of the Neck network of YOLOv5s, and adding a GAM attention module between the Neck network of YOLOv5s and the Head module.
Still another aspect of the present invention is a computer storage medium having a computer program stored thereon which, when executed by a processor, implements the above deep-learning-based method for real-time detection of crop diseases and insect pests.
The technical scheme provided by the invention has the following advantages:
the two jump connection of two Concat modules in the PAN from the backbone network to the YOLOv5s are led out by the shuffle_block module to form a BiFPN structure, the Concat modules are given weight to strengthen the utilization of effective information by the network, richer detail information and semantic information are fused with lower cost, and the scale of the feature pyramid can be adaptively adjusted according to the importance and contribution of the features. If the features of a level contribute significantly to the object detection task, the BiFPN will increase the resolution of the feature map of that level. Conversely, if the features of a level contribute less to the task, the BiFPN reduces the resolution of the feature map of that level. Furthermore, the method can be better suitable for plant diseases and insect pests targets with different scales, and the robustness of the model is improved. And meanwhile, a self-attention mechanism is added, so that the context information of the object is better captured when the object is detected, and the accuracy of the model is improved.
The strategy of improving the YOLOv5s model with modules from the lightweight network ShuffleNetv2 achieves real-time dynamic monitoring with a fast response speed. Meanwhile, Focal Loss and DIoU-NMS improve the training process, so that diseases and insect pests occluded by branches and leaves are detected more effectively; the method adapts to more complex environments, reduces missed detections, and further strengthens the robustness of the model.
Drawings
Fig. 1 is a schematic flow chart of a crop pest real-time detection method based on deep learning.
FIG. 2 is a schematic diagram of a network model for object detection according to the present invention.
FIG. 3 is a schematic diagram of a prior art YOLOv5s model.
Fig. 4 is a schematic diagram of a first shuffle_block module structure.
Fig. 5 is a schematic diagram of a second shuffle_block module structure.
Fig. 6 is a schematic diagram of a unit1 module structure.
Fig. 7 is a schematic diagram of a unit2 module structure.
Fig. 8 is a result graph of the PR curve.
Fig. 9 is a result diagram of P (precision).
FIG. 10 is a graph of the results of F1 (the harmonic mean of the precision and recall).
Fig. 11 is a graph of the result of R (recall).
Fig. 12 is a graph showing the result of loss and accuracy visualization.
Detailed Description
The present invention is further described below with reference to examples, which are to be construed as merely illustrative and not limiting of its scope. Various modifications of equivalent forms of the present invention that become apparent to those skilled in the art upon reading this description fall within the scope of the appended claims.
The crop pest real-time detection device based on deep learning provided by the embodiment of the invention comprises a data acquisition module, a detection network module, a network training module and a detection module. The data acquisition module is used for acquiring different pest and disease types and normal images of different crops, preprocessing the collected pest and disease images and constructing a crop pest and disease database; the detection network module is used for constructing a target detection network model; the network training module is used for training the target detection network model by utilizing the crop disease and insect pest database obtained by the data acquisition module to obtain a trained network model; the detection module is used for collecting images of crops to be detected, and detecting whether the crops are infected by the plant diseases and insect pests or not by the trained network model.
Referring to fig. 1, the method for detecting crop diseases and insect pests based on deep learning, which is implemented by detecting crop diseases and insect pests based on deep learning in real time, is as follows:
Step 1: obtain images of crop diseases and insect pests from the internet using web crawlers. Taking corn as an example, eight kinds of diseases and pests are covered: large spot, crazy top, brown spot, tumor smut, head smut, ear rot, small spot and rust.
Images of different pest and disease types and normal images of different crops are acquired. The acquired images are preprocessed, the preprocessing involving rotation, scaling, cropping, occlusion, color change, Gaussian noise addition and data labeling; the preprocessed images constitute the final crop pest and disease database.
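The patent publishes no source code, so the following is a minimal sketch of the preprocessing operations named above, using OpenCV and NumPy; all parameter ranges (rotation angle, scale factor, occlusion size, noise level) are illustrative assumptions rather than values from the patent.

```python
import cv2
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Rotation, scaling, cropping, occlusion, color change and Gaussian
    noise, as listed in the text; parameter ranges are assumptions."""
    h, w = image.shape[:2]
    # Rotation about the image center by a random angle.
    m = cv2.getRotationMatrix2D((w / 2, h / 2), rng.uniform(-15, 15), 1.0)
    image = cv2.warpAffine(image, m, (w, h))
    # Random scaling, then pad back so a fixed-size crop is possible.
    s = rng.uniform(0.8, 1.2)
    image = cv2.resize(image, (int(w * s), int(h * s)))
    image = cv2.copyMakeBorder(image, 0, max(0, h - image.shape[0]),
                               0, max(0, w - image.shape[1]),
                               cv2.BORDER_CONSTANT)
    y0 = rng.integers(0, image.shape[0] - h + 1)
    x0 = rng.integers(0, image.shape[1] - w + 1)
    image = image[y0:y0 + h, x0:x0 + w]
    # Occlusion: zero a random rectangle, mimicking cover by branches/leaves.
    oy, ox = rng.integers(0, h // 2), rng.integers(0, w // 2)
    image[oy:oy + h // 8, ox:ox + w // 8] = 0
    # Color change: random brightness/contrast jitter.
    image = cv2.convertScaleAbs(image, alpha=rng.uniform(0.8, 1.2),
                                beta=rng.uniform(-20, 20))
    # Additive Gaussian noise.
    noise = rng.normal(0, 5, image.shape)
    return np.clip(image.astype(np.float64) + noise, 0, 255).astype(np.uint8)
```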
Step 2: construct a target detection network model based on the YOLOv5s model, as shown in fig. 2.
The target detection network model constructed in this step is based on the YOLOv5s model (shown in fig. 3), with the feature extraction backbone network (Backbone) replaced by a CBSM module, a first shuffle_block module, a second shuffle_block module, a first shuffle_block module and an SPPF module connected in sequence. The SPPF module is an original module of the feature extraction backbone network of the YOLOv5s model.
The CBSM module consists of a convolution layer, a BN layer, a SiLU activation layer and a max pooling layer, where the convolution layer has a kernel size of 3×3, a stride of 1, a padding of 0 and 64 convolution kernels, and the max pooling layer has a window size of 2×2 and a stride of 2.
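As a hedged PyTorch sketch of the CBSM module under exactly the parameters just stated (the patent gives the layer composition but no code, so the implementation below is an illustrative assumption):

```python
import torch
import torch.nn as nn

class CBSM(nn.Module):
    """Conv (3x3, stride 1, padding 0, 64 kernels) -> BN -> SiLU ->
    MaxPool (2x2, stride 2), per the composition described in the text."""
    def __init__(self, in_channels: int = 3, out_channels: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels,
                              kernel_size=3, stride=1, padding=0)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.SiLU()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(self.act(self.bn(self.conv(x))))
```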
As shown in fig. 4 and fig. 5, the first shuffle_block module and the second shuffle_block module are both built from block modules of the lightweight network ShuffleNetv2. Specifically, the first shuffle_block module is composed of a unit2 module followed by three unit1 modules (unit1×3), and the second shuffle_block module of a unit2 module followed by five unit1 modules (unit1×5). As shown in fig. 6 and fig. 7, unit1 is the ShuffleNetv2 block with stride 1, and unit2 is the ShuffleNetv2 block with stride 2.
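For reference, here is a sketch of the standard ShuffleNetv2 stride-1 block (unit1) the text refers to: channels are split in half, one half passes through a 1×1 conv, a 3×3 depthwise conv and another 1×1 conv, the halves are concatenated, and a channel shuffle mixes the branches. This follows the published ShuffleNetv2 design, not patent-specific code.

```python
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    # Reshape (N, C, H, W) -> (N, g, C//g, H, W), swap the group axes,
    # and flatten back so channels interleave across the two branches.
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

class Unit1(nn.Module):
    """Stride-1 ShuffleNetv2 block (unit1). unit2, the stride-2 variant,
    additionally downsamples both branches and doubles the channels."""
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False),
            nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False),
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False),
            nn.BatchNorm2d(half), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = x.chunk(2, dim=1)
        return channel_shuffle(torch.cat((a, self.branch(b)), dim=1))
```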
In this feature extraction backbone network, the outputs of the two second shuffle_block modules and of the SPPF module are connected to the Neck network of the YOLOv5s model, with the outputs of the two shuffle_block modules skip-connected to the two Concat modules of the original PAN part of the Neck network. A convolution module is added after each of the two Concat modules; its kernel size is 1×1, its stride 1, its padding 0, and its number of kernels half the original channel count. A weighted bidirectional feature pyramid network (BiFPN) structure is thereby constructed in the Neck network, improving on the original feature pyramid network (FPN) and path aggregation network (PAN) and performing multi-scale fusion of the different feature maps.
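A minimal sketch of the weighted fusion that BiFPN performs at each merge point, using the fast normalized fusion of the BiFPN paper (learnable non-negative weights; inputs are assumed to have already been resized to a common shape); the code is an illustrative assumption, not the patent's implementation:

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """out = sum_i(w_i * x_i) / (sum_j w_j + eps), with w_i >= 0 via ReLU,
    so the network learns how much each incoming feature map contributes."""
    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, inputs: list) -> torch.Tensor:
        w = torch.relu(self.weights)
        w = w / (w.sum() + self.eps)
        return sum(wi * xi for wi, xi in zip(w, inputs))
```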
Finally, a GAM attention module is connected between the Neck network and the Head network of the YOLOv5s model; each layer of the Head network is given a GAM self-attention module to enhance the extraction of global pest feature information. This completes the improved target detection network model based on the YOLOv5s model.
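A hedged sketch of a GAM-style attention module, following the published GAM structure (channel attention via an MLP applied across the channel dimension, then spatial attention via 7×7 convolutions); the reduction ratio and kernel size are assumptions, as the patent does not specify them:

```python
import torch
import torch.nn as nn

class GAM(nn.Module):
    """Channel attention (MLP over channels) followed by spatial
    attention (two 7x7 convolutions), per the GAM paper's layout."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = channels // reduction
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, channels))
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, hidden, 7, padding=3),
            nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 7, padding=3),
            nn.BatchNorm2d(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Channel attention: move channels last so the MLP mixes them.
        att = self.channel_mlp(x.permute(0, 2, 3, 1).reshape(-1, c))
        att = att.reshape(n, h, w, c).permute(0, 3, 1, 2)
        x = x * torch.sigmoid(att)
        # Spatial attention over the channel-refined features.
        return x * torch.sigmoid(self.spatial(x))
```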
Step 3: train the target detection network model constructed in step 2 with the crop pest and disease database obtained in step 1 to obtain a trained network model.
During training, Mosaic data enhancement, adaptive anchor box calculation and adaptive image scaling are applied at the input end of the network, improving the inference speed of the model. The Focal Loss function is adopted: building on the balanced cross-entropy loss, Focal Loss reduces the weight of easily classified samples and increases the weight of hard samples, improving the detection of minority classes.
The Focal Loss function is defined as follows:

$FL(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$

where $(1 - p_t)^{\gamma}$ is the modulating factor and γ ≥ 0 is a tunable focusing parameter. For accurately classified samples, p_t approaches 1 and $(1 - p_t)^{\gamma}$ approaches 0, reducing the weight of accurately classified samples in the loss function; the weight factor α is introduced to reduce the weight of the positive or negative samples and so address sample imbalance.
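A minimal PyTorch sketch of this Focal Loss for binary logits; α = 0.25 and γ = 2 are the common defaults from the Focal Loss paper, not values specified by the patent:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), built on binary
    cross-entropy (which equals -log(p_t)) for numerical stability."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)          # prob of true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()
```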
The DIoU-NMS (distance intersection-over-union) non-maximum suppression algorithm replaces the NMS non-maximum suppression algorithm of YOLOv5s; specifically, a DIoU loss function replaces the IoU loss function, defined as

$L_{DIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2}$

where b and b^gt are the center points of the prediction box B and the target box B^gt respectively, ρ²(b, b^gt) is the square of the Euclidean distance between them, and c is the diagonal length of the smallest enclosing box covering both B and B^gt.
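A hedged sketch of DIoU-NMS: greedy NMS in which the suppression test uses IoU minus the normalized center-distance penalty ρ²/c². Boxes are assumed to be (x1, y1, x2, y2) tensors; the threshold value is an assumption.

```python
import torch

def diou_nms(boxes: torch.Tensor, scores: torch.Tensor,
             threshold: float = 0.45) -> torch.Tensor:
    """Keep boxes greedily by score; suppress a box only when
    IoU - rho^2/c^2 with an already-kept box exceeds the threshold."""
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0]
        keep.append(int(i))
        if order.numel() == 1:
            break
        top, rest = boxes[i], boxes[order[1:]]
        # Plain IoU between the kept box and all remaining boxes.
        lt = torch.maximum(top[:2], rest[:, :2])
        rb = torch.minimum(top[2:], rest[:, 2:])
        wh = (rb - lt).clamp(min=0)
        inter = wh[:, 0] * wh[:, 1]
        area_top = (top[2] - top[0]) * (top[3] - top[1])
        area_rest = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        iou = inter / (area_top + area_rest - inter + 1e-9)
        # Squared center distance over squared enclosing-box diagonal.
        rho2 = (((top[:2] + top[2:]) / 2
                 - (rest[:, :2] + rest[:, 2:]) / 2) ** 2).sum(dim=1)
        enc_lt = torch.minimum(top[:2], rest[:, :2])
        enc_rb = torch.maximum(top[2:], rest[:, 2:])
        c2 = ((enc_rb - enc_lt) ** 2).sum(dim=1) + 1e-9
        order = order[1:][iou - rho2 / c2 <= threshold]
    return torch.tensor(keep, dtype=torch.long)
```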
Step 4: acquire an image of the crop to be detected and detect with the trained network model whether the crop is infected by diseases and insect pests.
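A minimal end-to-end sketch of this detection step. The model's output format, the helper `letterbox` (sketched after the training-procedure paragraph below) and `diou_nms` (sketched above) are illustrative assumptions; the patent does not publish an inference script.

```python
import cv2
import torch

def detect(model: torch.nn.Module, image_path: str,
           conf_threshold: float = 0.5) -> list:
    """Run the trained network on one crop image and return detections.
    Assumes the model returns decoded (boxes, scores, class_ids)."""
    model.eval()
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    x = letterbox(img, 640)                      # adaptive pad/scale
    x = torch.from_numpy(x).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        boxes, scores, class_ids = model(x)
    mask = scores > conf_threshold               # drop low-confidence boxes
    boxes, scores, class_ids = boxes[mask], scores[mask], class_ids[mask]
    keep = diou_nms(boxes, scores)               # DIoU-based suppression
    return [{"box": boxes[i].tolist(), "score": float(scores[i]),
             "class": int(class_ids[i])} for i in keep]
```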
A specific training procedure of this embodiment is as follows: first, the dataset, which covers 8 different crop diseases and insect pests, is divided into a training set, a validation set and a test set in the ratio 8:1:1; then the input pictures are adaptively padded and scaled to 640×640; finally, they are fed into the final model for training.
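The adaptive padding and scaling to 640×640 can be sketched as a standard letterbox resize; the gray padding value 114 is the YOLOv5 convention and is assumed here rather than stated by the patent:

```python
import cv2
import numpy as np

def letterbox(img: np.ndarray, size: int = 640) -> np.ndarray:
    """Scale the longer side to `size` while keeping the aspect ratio,
    then pad the remainder so the output is exactly size x size."""
    h, w = img.shape[:2]
    r = size / max(h, w)
    resized = cv2.resize(img, (int(round(w * r)), int(round(h * r))))
    pad_h, pad_w = size - resized.shape[0], size - resized.shape[1]
    top, left = pad_h // 2, pad_w // 2
    return cv2.copyMakeBorder(resized, top, pad_h - top, left, pad_w - left,
                              cv2.BORDER_CONSTANT, value=(114, 114, 114))
```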
In the training process, the batch size is set to 4, with 94 pictures arranged per batch, and the initial learning rate is 0.01; the following figures show the training results after 200 epochs of iteration.
Fig. 8 is the result graph of the PR curve; the larger the area under the curve, the better the model. The mAP, i.e. the area under the curve, is 0.919.
Fig. 9 is the result graph of P (precision), the probability that a sample predicted as positive is actually positive. Precision represents the prediction accuracy among positive-sample results; the larger the value, the better. The graph shows that the precision is about 0.9 at a confidence of 0.5.
Fig. 10 is the result graph of F1 (the harmonic mean of precision and recall); the F1 score takes both precision and recall into account. The graph shows that at a confidence of 0.327 both reach their highest values simultaneously and a balance is achieved.
Fig. 11 is the result graph of R (recall), the probability that an actually positive sample is predicted as positive; the larger the value, the better. The graph shows that the recall is about 0.8 at a confidence of 0.5.
Fig. 12 is the loss and accuracy visualization. The loss function decreases gradually during training: in the first 80 epochs it drops sharply while the P, R and mAP parameters rise sharply; from epoch 80 to 200 the downward trend of the loss slows and the rise of P, R and mAP slows accordingly, indicating that the model is close to converging to an optimal state.
The losses comprise localization loss, confidence loss and classification loss; the classification loss results from mispredicting a given object class, and the smaller the loss value, the more accurate the classification. After Focal Loss is introduced, the classification loss value is clearly lower than before the improvement, and precision and recall rise faster and reach better values than before the improvement.
It should be noted that the specific methods of the above-described embodiments may form computer program products; the computer program products embodied herein may therefore be stored on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM and optical storage).

Claims (10)

1. A crop pest real-time detection method based on deep learning, characterized by comprising the following steps:
step 1, collecting images of different pest and disease types and normal images of different crops, preprocessing the collected pest and disease images, and constructing a crop pest and disease database;
step 2, constructing a target detection network model;
step 3, training the target detection network model by utilizing the crop disease and insect pest database obtained in the step 1 to obtain a trained network model;
step 4, acquiring an image of the crops to be detected, and using the trained network model to detect whether the crops are infected with diseases and insect pests;
wherein the target detection network model is an improvement of YOLOv5s; the improvement comprises replacing the feature extraction backbone network of YOLOv5s with a CBSM module, shuffle_block modules from ShuffleNetv2 and the SPPF module of YOLOv5s, leading two skip connections out from the shuffle_block modules of the backbone network to the two Concat modules in the PAN of the Neck network of YOLOv5s, and adding a GAM attention module between the Neck network of YOLOv5s and the Head module.
2. The method for detecting crop diseases and insect pests in real time based on deep learning according to claim 1, wherein the shuffle_block modules comprise a first shuffle_block module and a second shuffle_block module; the replacement feature extraction backbone network of YOLOv5s consists of a CBSM module, a first shuffle_block module, a second shuffle_block module, a first shuffle_block module and an SPPF module connected in sequence; the skip connections are led out from two second shuffle_block modules; the first shuffle_block module is composed of a unit2 module followed by three unit1 modules (unit1×3), and the second shuffle_block module of a unit2 module followed by five unit1 modules (unit1×5), where unit1 is the ShuffleNetv2 block with stride 1 and unit2 is the ShuffleNetv2 block with stride 2.
3. The method for detecting crop diseases and insect pests in real time based on deep learning according to claim 1 or 2, wherein a convolution module with a kernel size of 1×1, a stride of 1 and a padding of 0 is added after each of the two Concat modules that receive the skip connections.
4. The method for detecting crop diseases and insect pests in real time based on deep learning according to claim 1, wherein when the target detection network model is trained in step 3, the data in the crop pest and disease database is sequentially subjected to Mosaic data enhancement, adaptive anchor box calculation and adaptive image scaling before being input into the target detection network model, and during training the Focal Loss function and a DIoU-NMS (distance intersection-over-union) non-maximum suppression algorithm replace the NMS non-maximum suppression algorithm of YOLOv5s.
5. The method for detecting crop diseases and insect pests in real time based on deep learning according to claim 4, wherein the Focal Loss function is defined as

$FL(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$

where $(1 - p_t)^{\gamma}$ is the modulating factor, γ ≥ 0 is a tunable focusing parameter, and p_t is the predicted probability, with a value between 0 and 1.
6. The method for detecting crop diseases and insect pests in real time based on deep learning according to claim 4, wherein the DIoU-NMS (distance intersection-over-union) non-maximum suppression algorithm replaces the NMS non-maximum suppression algorithm of YOLOv5s with a DIoU loss function in place of the IoU loss function, the DIoU loss function being defined as

$L_{DIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2}$

where b and b^gt are the center points of the prediction box B and the target box B^gt respectively, ρ²(b, b^gt) is the square of the Euclidean distance between them, and c is the diagonal length of the smallest enclosing box covering both B and B^gt.
7. The method for detecting crop diseases and insect pests in real time based on deep learning according to claim 1, wherein the preprocessing in step 1 comprises rotation, scaling, cropping, occlusion, color change, Gaussian noise addition and data labeling.
8. A crop pest real-time detection device based on deep learning, characterized by comprising:
a data acquisition module, which collects images of different pest and disease types and normal images of different crops, preprocesses the collected pest and disease images, and constructs a crop pest and disease database;
a detection network module, which constructs a target detection network model;
a network training module, which trains the target detection network model with the crop pest and disease database obtained by the data acquisition module to obtain a trained network model;
and a detection module, which collects images of crops to be detected and uses the trained network model to detect whether the crops are infected by diseases and insect pests;
wherein the target detection network model is an improvement of YOLOv5s; the improvement comprises replacing the feature extraction backbone network of YOLOv5s with a CBSM module, shuffle_block modules from ShuffleNetv2 and the SPPF module of YOLOv5s, leading two skip connections out from the shuffle_block modules of the backbone network to the two Concat modules in the PAN of the Neck network of YOLOv5s, and adding a GAM attention module between the Neck network of YOLOv5s and the Head module.
9. The device for detecting crop diseases and insect pests in real time based on deep learning according to claim 8, wherein the shuffle_block modules comprise a first shuffle_block module and a second shuffle_block module; the replacement feature extraction backbone network of YOLOv5s consists of a CBSM module, a first shuffle_block module, a second shuffle_block module, a first shuffle_block module and an SPPF module connected in sequence; the skip connections are led out from two second shuffle_block modules; the first shuffle_block module is composed of a unit2 module followed by three unit1 modules (unit1×3), and the second shuffle_block module of a unit2 module followed by five unit1 modules (unit1×5), where unit1 is the ShuffleNetv2 block with stride 1 and unit2 is the ShuffleNetv2 block with stride 2.
10. A computer storage medium having stored thereon a computer program, which when executed by a processor, implements the deep learning based crop pest real-time detection method of any one of claims 1 to 7.
CN202311333787.9A 2023-10-16 Crop pest real-time detection method and device based on deep learning and computer storage medium Active CN117351353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311333787.9A CN117351353B (en) 2023-10-16 Crop pest real-time detection method and device based on deep learning and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311333787.9A CN117351353B (en) 2023-10-16 Crop pest real-time detection method and device based on deep learning and computer storage medium

Publications (2)

Publication Number Publication Date
CN117351353A true CN117351353A (en) 2024-01-05
CN117351353B CN117351353B (en) 2024-11-15

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019164A (en) * 2022-04-21 2022-09-06 青岛鼎信通讯消防安全有限公司 Image type fire detector smoke and fire identification method based on deep learning method
CN115272828A (en) * 2022-08-11 2022-11-01 河南省农业科学院农业经济与信息研究所 Intensive target detection model training method based on attention mechanism
CN115588126A (en) * 2022-09-29 2023-01-10 长三角信息智能创新研究院 GAM, CARAFE and SnIoU fused vehicle target detection method
CN116071701A (en) * 2023-01-13 2023-05-05 昆明理工大学 YOLOv5 pedestrian detection method based on attention mechanism and GSConv
CN116310836A (en) * 2023-04-10 2023-06-23 辽宁工程技术大学 Improved corn leaf disease and pest automatic detection method based on YOLO model
CN116563766A (en) * 2023-06-01 2023-08-08 南京工业大学 Improved YOLOv5 s-based high-voltage transmission line bird nest target detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG YONGPING et al., "Strip steel surface defect detection based on lightweight YOLOv5", Frontiers in Neurorobotics, 4 October 2023 (2023-10-04), pages 1-16 *
REN Anhu, "Road crack detection algorithm based on improved YOLOv5s" (基于改进YOLOv5s的道路裂缝检测算法), Laser Journal (《激光杂志》), 27 June 2023 (2023-06-27), pages 88-94 *

Similar Documents

Publication Publication Date Title
Junos et al. An optimized YOLO‐based object detection model for crop harvesting system
Banerjee et al. Fast and accurate multi-classification of kiwi fruit disease in leaves using deep learning approach
CN113312999B (en) High-precision detection method and device for diaphorina citri in natural orchard scene
CN110765865A (en) Underwater target detection method based on improved YOLO algorithm
CN113435355A (en) Multi-target cow identity identification method and system
CN114693616B (en) Rice disease detection method, device and medium based on improved target detection model and convolutional neural network
Fan et al. A novel sonar target detection and classification algorithm
CN116597224A (en) Potato defect detection method based on improved YOLO V8 network model
CN111783819A (en) Improved target detection method based on region-of-interest training on small-scale data set
CN112149664A (en) Target detection method for optimizing classification and positioning tasks
CN115147648A (en) Tea shoot identification method based on improved YOLOv5 target detection
CN116682106A (en) Deep learning-based intelligent detection method and device for diaphorina citri
CN116912796A (en) Novel dynamic cascade YOLOv 8-based automatic driving target identification method and device
CN116843971A (en) Method and system for detecting hemerocallis disease target based on self-attention mechanism
CN112307984A (en) Safety helmet detection method and device based on neural network
CN115909221A (en) Image recognition method, system, computer device and readable storage medium
Bachhal et al. Real-time disease detection system for maize plants using deep convolutional neural networks
CN118279566B (en) Automatic driving target detection system for small object
CN118172690B (en) Corn leaf plant disease and insect pest detection method based on improved YOLOv model
CN117351353B (en) Crop pest real-time detection method and device based on deep learning and computer storage medium
CN113221853A (en) Yolov 4-based chicken farm feeding identification algorithm
CN112966762A (en) Wild animal detection method and device, storage medium and electronic equipment
Hu et al. Automatic detection of pecan fruits based on Faster RCNN with FPN in orchard
CN117351353A (en) Crop pest real-time detection method and device based on deep learning and computer storage medium
CN113344911B (en) Method and device for measuring size of calculus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant