CN114359727B - Tea disease identification method and system based on lightweight optimization Yolo v4 - Google Patents
- Publication number: CN114359727B
- Application number: CN202111663816.9A
- Authority
- CN
- China
- Prior art keywords
- yolo
- tea
- disease
- model
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a tea disease identification method and system based on a lightweight optimized Yolo v4. The method comprises the following steps: collecting tea disease pictures and preprocessing them to obtain a dataset for training the Yolo v4 model; carrying out lightweight optimization on the feature extraction backbone module and the feature extraction fusion module in the Yolo v4 model to obtain an optimized Yolo v4 model; training and verifying the optimized Yolo v4 model with the dataset to obtain an optimal Yolo v4 model for identifying tea diseases; and identifying tea disease images with the obtained optimal Yolo v4 model. The invention effectively reduces the huge parameter count and model volume of the original Yolo v4 network model, and improves the detection efficiency and recognition accuracy for tea disease targets.
Description
Technical Field
The invention relates to the technical field of crop disease target detection, in particular to a tea disease identification method and system based on a lightweight optimized Yolo v4.
Background
The total tea yield of the four tea producing areas in China accounts for more than 40% of global tea yield, ranking first in the world. However, diseases of tea leaves that spread easily by wind and rain (such as tea white star disease, tea anthracnose and tea algae spot disease) are key factors that seriously affect tea yield. Rapid identification of tea diseases as early as possible in the tea growth period, together with a reduction in pesticide dosage, is therefore of great significance for improving the response speed of tea farmers to diseases, assisting them in treating diseases, and promoting the intelligent development of China's tea industry. Common tea diseases mainly include tea white star disease, tea anthracnose, tea leaf blight and tea leaf spot disease. Judgment and analysis of these diseases by tea farmers and plant protection experts is easily influenced by subjective experience and lacks objective quantifiable standards, and the time point at which diseases are found manually lags behind their onset. In recent years, algorithms and models represented by deep learning have greatly promoted the application and development of artificial intelligence in crop disease identification, mainly in two directions: disease image classification and target disease identification.
At present, in deep learning-based tea disease classification research, researchers often crop collected tea disease pictures down to a single diseased leaf and then perform simple disease classification with classification networks such as AlexNet, VGG16 and the lightweight MobileNet. This does not fully account for the complexity of mutually occluding leaves in a real tea garden scene, unstable imaging quality, or the real-time requirements of disease identification. In recent years, end-to-end target detection algorithms represented by Yolo v4 have achieved good recognition precision and speed in various specific scenes, but the corresponding network models have excessive parameter counts and computation, making them difficult to deploy and run on embedded devices or mobile terminals with limited GPU (graphics processing unit) computing resources. Moreover, the distribution of diseases on tea plants is highly random, and different disease types show certain similarities in shape, color, texture and other characteristics at different stages, so the unimproved original Yolo v4 algorithm has a high false detection rate on tea diseases.
Therefore, how to reduce the huge parameter count and model volume of the original Yolo v4 network model while improving the detection efficiency and recognition accuracy for tea disease targets is a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a tea disease identification method and system based on a lightweight optimized Yolo v4, which remarkably reduce the huge parameter count and model volume of the original Yolo v4 network model and improve the detection efficiency and recognition accuracy for tea disease targets in a real tea garden scene, so that the optimized Yolo v4 can easily be transplanted to and deployed on embedded devices or mobile terminal platforms to assist tea farm disease control.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
the invention discloses a tea disease identification method based on lightweight optimization Yolo v < 4 >, which comprises the following steps: the method comprises the following steps:
s1, collecting tea disease pictures, and preprocessing the pictures to obtain a dataset for training Yolo v model 4;
S2, carrying out lightweight optimization on a feature extraction trunk module and a feature extraction fusion module in the Yolo v model to obtain an optimized Yolo v model;
s3, training and verifying the optimized Yolo v model by utilizing the dataset of the Yolo v4 model trained in the step S1, and obtaining an optimal Yolo v model for identifying tea diseases;
and S4, identifying the tea disease image by using the optimal Yolo v model obtained in the step S3.
Preferably, the collection of tea disease pictures in step S1 specifically comprises collecting tea disease pictures at different time points and under various illumination conditions in the tea garden with a handheld digital camera.
Preferably, in step S1, the collected tea disease picture is preprocessed, which specifically includes:
s11, setting a tea disease characteristic imaging threshold, screening the collected tea disease pictures according to the tea disease characteristic imaging threshold, and removing the tea disease pictures with pixel values lower than the tea disease characteristic imaging threshold from the collected tea disease pictures;
S12, classifying the screened tea disease pictures according to tea disease characteristics to obtain disease categories of the tea disease pictures, wherein the tea disease characteristics comprise disease shapes and disease colors;
S13, establishing an original data set from the classified tea disease pictures, and expanding the tea disease pictures in the original data set with a same-class data enhancement method;
S14, dividing the expanded original data set into a training set, a verification set and a test set;
s15, taking tea disease pictures of different disease categories in a training set as Mixup mixed type enhanced input samples, and obtaining a Mixup mixed type enhanced training set;
S16, labeling the tea disease pictures in the Mixup-enhanced training set and in the verification set not subjected to Mixup mixed enhancement in yolo format, according to each picture's disease category and the position of the disease on the leaf, and generating the dataset for training the Yolo v4 model from the yolo-format-labeled training and verification sets together with the unlabeled test set.
Preferably, in step S14, the expanded original data set is divided into a training set, a validation set and a test set at a ratio of 7:2:1.
Preferably, step S2 performs lightweight optimization on the feature extraction backbone module and the feature extraction fusion module in the Yolo v4 model, and specifically includes:
(1) Adopting the lightweight MobilenetV2 network as the new feature extraction backbone module of the Yolo v4 model, and replacing the deep-layer ReLU6 activation function of the MobilenetV2 network with h-swish to optimize MobilenetV2, wherein the activation function h-swish is:
h-swish(x) = x · ReLU6(x + 3) / 6, with ReLU6(x) = min(max(0, x), 6),
wherein x represents the tea disease feature information extracted by a MobilenetV2 network layer, serving as the input node of the corresponding layer;
(2) Adopting the dilated convolution DT-Conv as the convolution in the FPN structure of the feature extraction fusion module, and the depthwise separable convolution DW-Conv as the convolution in the PAN structure of the feature extraction fusion module, to optimize the feature extraction fusion module in the Yolo v4 model;
(3) Extracting the feature maps out1, out2 and out3 output by different network layers of the MobilenetV2 optimized in method (1), wherein feature map out1 has size 52×52×32, feature map out2 has size 26×26×96 and feature map out3 has size 13×13×320, with 32, 96 and 320 being the channel numbers of the three feature maps;
(4) The feature map out3 is input into the SPPNet of the optimized feature extraction fusion module, and the feature map processed by SPPNet is input into the PANet of the feature extraction fusion module together with feature maps out1 and out2.
Preferably, in step (4), inputting the feature map out3 into the SPPNet of the optimized feature extraction fusion module for processing specifically includes: inputting out3 into the SPPNet of the feature extraction fusion module, where it undergoes three CBL (Conv2D_BN_Leaky) convolutions, block pooling at four scales (1×1, 5×5, 9×9, 13×13) and Concat splicing.
Preferably, step S3 specifically includes:
S31, using MobilenetV2 weights pre-trained on an open source dataset as the initial weights of the optimized Yolo v4 model;
S32, performing multiple rounds of network training on the optimized Yolo v4 network model with the training set in the Yolo v4 model dataset;
S33, performing supervised learning on the labeled disease information with the verification set in the dataset, providing verification feedback across the multiple rounds of training, and adjusting parameters and judging overfitting in time to update the Yolo v4 model; the initial weights are adjusted automatically during network training on the dataset, producing the weights at the end of training;
S34, selecting several groups of weights from the end of network training, loading them into the Yolo v4 model prediction module, and carrying out comparative analysis of prediction performance on the tea disease pictures in the test set with the prediction module, to obtain the optimal Yolo v4 identification model for tea diseases.
Preferably, the comparative analysis on the tea disease pictures in the test set in step S34 specifically includes comparing the models' tea disease identification accuracy, identification time consumption and real-time performance.
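The comparison in S34 amounts to choosing, among the saved weight groups, the checkpoint with the best accuracy that still meets a real-time budget. A minimal sketch of that selection logic (the checkpoint names, mAP values and per-picture times below are hypothetical illustrations, not the patent's measurements):

```python
# Hypothetical checkpoints: (file name, mAP on the test set, seconds per picture).
# All values are illustrative only; they are not the patent's reported numbers.
checkpoints = [
    ("epoch_030.pth", 0.79, 0.034),
    ("epoch_060.pth", 0.83, 0.036),
    ("epoch_100.pth", 0.84, 0.035),
]

def select_best(candidates, max_seconds_per_image=0.05):
    """Discard checkpoints that miss the real-time budget, then pick the highest mAP."""
    feasible = [c for c in candidates if c[2] <= max_seconds_per_image]
    if not feasible:
        raise ValueError("no checkpoint meets the latency budget")
    return max(feasible, key=lambda c: c[1])

best_name, best_map, best_time = select_best(checkpoints)
```

Tightening the latency budget changes the winner, which is exactly the accuracy/real-time trade-off the comparative analysis examines.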
The invention further discloses a tea disease identification system based on a lightweight optimized Yolo v4, which comprises a user terminal and a cloud server. The user terminal is used for collecting tea disease pictures and displaying tea disease identification results; the cloud server is used for training on and identifying the tea disease pictures acquired by the user terminal according to any of the above tea disease identification methods based on the lightweight optimized Yolo v4, and for returning the identification results to the user terminal.
Compared with the prior art, the tea disease identification method based on the lightweight optimized Yolo v4 disclosed by the invention has the following beneficial effects:
1. Tea disease images at different periods and under various illumination conditions in the tea garden are collected in the field, enriching the tea disease datasets missing from existing deep learning-based tea disease identification research, and providing data support for intelligent management and analysis of diseases in smart tea gardens.
2. On the basis of same-class data enhancement preprocessing, the Mixup mixed enhancement method is introduced to linearly process the neighborhood relation between samples of different classes, providing a smoother basis for class transitions when the subsequent feature extraction backbone module classifies different disease features.
3. The deep-layer ReLU6 activation function of the MobilenetV2 network is replaced with h-swish, which helps reduce precision loss when fitting the deep activation layers of the network; meanwhile, the SPPNet and PANet in the feature extraction fusion module are retained, restraining the weakening of feature extraction capability caused by reducing the overall depth of the network.
4. The feature extraction backbone module and feature extraction fusion module, which carry a high computational load in the traditional Yolo v4 network, are lightweight-optimized: the parameter count of the Yolo v4 network model is reduced by 71.2% and the model volume by 81.4%, remarkably lowering the model's demands on GPU (graphics processing unit) computing resources and performance; the mean average precision (mAP) of target detection after training the recognition model for 30 to 100 epochs is approximately 84%, and the detection time for a single disease picture on the test set is about 0.035 s.
5. The Mixup mixed enhancement method and the h-swish activation function substitution introduced by the invention have very small computational cost, and the Yolo v4 network model used is built on the PyTorch deep learning framework, which supports both Windows and Linux operating systems, making it easy to transplant to and deploy on embedded devices or mobile terminal platforms.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the tea disease identification method provided by the invention;
FIG. 2 is a diagram of the network structure of the lightweight optimized Yolo v4 provided by the invention;
FIG. 3 is a diagram of the SPPNet structure of the lightweight optimized Yolo v4 feature extraction fusion module according to the present invention;
FIG. 4 is a diagram of the PANet structure of the lightweight optimized Yolo v4 feature extraction fusion module according to the present invention;
FIG. 5 is a graph of the target detection mean average precision (mAP) of Yolo v4 after 30 or 100 training rounds according to the present invention;
FIG. 6 is a graph of the disease classification loss values of Yolo v4 at the end of 30 or 100 training rounds according to the present invention;
FIG. 7 is a schematic structural diagram of the tea disease identification system based on the lightweight optimized Yolo v4.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in figs. 1-6, the embodiment of the invention discloses a tea disease identification method based on a lightweight optimized Yolo v4, which comprises the following steps:
S1, collecting tea disease pictures, and preprocessing the pictures to obtain a dataset for training the Yolo v4 model;
In this embodiment, the tea disease pictures are collected in a real tea garden; the collection site is a tea garden of the Guangdong Academy of Agricultural Sciences in Guangzhou, Guangdong. A digital camera is used to photograph diseased-leaf samples at multiple angles at a shooting distance of about 40-50 cm, and besides the diseased leaves, the camera's imaging view also contains several normal, mutually occluding leaves, so as to improve the deep learning capability of Yolo v4 in a real disease identification scene. Following the multiple onset periods of the four common tea diseases, disease images were acquired on 29 June, 10 July, 9 August, 18 September, 20 October and 17 November 2021; to simulate imaging effects under different illumination conditions, the daily image acquisition windows were 9:00-10:00, 12:00-12:30 and 15:30-17:00.
The preprocessing process specifically comprises the following steps:
S11, setting a tea disease characteristic imaging threshold, screening the collected tea disease pictures against it, and removing pictures whose pixel values fall below the threshold. This step mainly uses existing machine learning means to remove tea disease pictures with blurred pixels and unrecognizable disease characteristics, so as to improve the quality of the original dataset; it is not a main inventive contribution, so its specific implementation is not described in detail here.
S12, classifying the screened tea disease pictures by tea disease characteristics such as disease shape and disease color into four classes: tea white star disease, tea anthracnose, tea leaf blight and tea leaf spot disease, mainly based on disease shape and color and with some reference to the third edition of Crop Diseases and Insect Pests in China (China Agriculture Press) and the opinions of tea garden experts. As in step S11, this step uses existing machine learning means to automatically classify the screened original tea disease pictures by disease shape, disease color and other tea disease characteristics.
S13, establishing an original dataset from the classified tea disease pictures, and expanding each class of tea disease pictures in the original dataset with a same-class data enhancement method; after expansion there are about 304 pictures per disease class, 1,216 in total, which after uniform mixing form the initial dataset. The same-class data enhancement may expand the original tea disease pictures of each class by horizontal and vertical flipping, brightness changes, affine transformations, adding some Gaussian noise, and the like.
S14, randomly dividing the expanded original dataset into a training set, a verification set and a test set at a certain ratio; in this embodiment the ratio of training set to verification set to test set is 7:2:1;
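The 7:2:1 split in S14 can be sketched as a seeded random partition over the picture file list (the file names below are placeholders for the actual dataset):

```python
import random

def split_dataset(paths, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle the picture paths reproducibly, then split them train/val/test at 7:2:1."""
    items = list(paths)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * ratios[0])
    n_val = int(len(items) * ratios[1])
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# With the embodiment's 1216 expanded pictures this yields 851 / 243 / 122 files.
train, val, test = split_dataset([f"img_{i:04d}.jpg" for i in range(1216)])
```

Fixing the seed keeps the split reproducible between training runs.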
s15, taking tea disease pictures of different disease categories in a training set as Mixup mixed type enhanced input samples, and obtaining a Mixup mixed type enhanced training set;
The same-class data enhancement method can only expand the total amount of the dataset; the Mixup mixed enhancement method, by linearly processing the neighborhood relation between samples of different classes, can provide a smoother basis for class transitions when the subsequent feature extraction backbone module classifies different disease features. The Mixup data enhancement principle is as follows:
Mixed_x = λ · sample_x1 + (1 − λ) · sample_x2
Mixed_y = λ · sample_y1 + (1 − λ) · sample_y2
wherein sample_x1 and sample_x2 are two input samples in the training set, sample_y1 and sample_y2 are the labels corresponding to those input samples, and λ is a coefficient following a Beta distribution, used to mix the two input samples in proportion; in this embodiment λ takes the value 0.2, and (Mixed_x, Mixed_y) are the sample and label after mixed enhancement.
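The two Mixup formulas translate directly into code. A NumPy sketch with toy 4×4 "pictures" and one-hot labels (real inputs would be training-set images; note that standard Mixup samples λ from a Beta distribution each time, whereas this embodiment fixes λ = 0.2):

```python
import numpy as np

def mixup(sample_x1, sample_y1, sample_x2, sample_y2, lam=0.2):
    """Blend two samples and their labels: Mixed = lam * s1 + (1 - lam) * s2."""
    mixed_x = lam * sample_x1 + (1.0 - lam) * sample_x2
    mixed_y = lam * sample_y1 + (1.0 - lam) * sample_y2
    return mixed_x, mixed_y

# Toy stand-ins for a picture of one disease class and a picture of another
x1, y1 = np.ones((4, 4)), np.array([1.0, 0.0])
x2, y2 = np.zeros((4, 4)), np.array([0.0, 1.0])
mx, my = mixup(x1, y1, x2, y2)   # every pixel becomes 0.2, label becomes (0.2, 0.8)
```

The blended label (0.2, 0.8) is what gives the classifier the smoother class-transition signal described above.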
To further reduce the false detection rate of the original Yolo v4 model on tea diseases, tea disease pictures whose characteristics show certain similarity during some period can be paired as input samples for Mixup mixed enhancement. For example, the round lesions presented in the early stage of tea white star disease and of tea algae spot disease are similar in shape, and the gray-cyan lesions in the later stage of tea anthracnose are similar in color to the gray moire lesions in the later stage of tea leaf blight; therefore tea white star disease and tea algae spot disease are taken as one pair of input samples, tea anthracnose and tea leaf blight as another pair, and the two pairs of input samples undergo mixed enhancement through Mixup.
S16, labeling the tea disease pictures in the enhanced training set and in the verification set not subjected to Mixup mixed enhancement in yolo format, according to each picture's disease category and the position of the disease on the leaf, and generating the dataset for training the Yolo v4 model from the yolo-format-labeled enhanced training set and verification set together with the unlabeled test set.
In this embodiment, the LabelImg visual image labeling tool is used to label the disease pictures in the enhanced training set and the verification set in yolo format; after labeling, each disease picture corresponds one-to-one to a txt file containing its labeling information, forming the dataset finally used for training Yolo v4.
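A yolo-format label file contains one line per annotated lesion: the class index followed by the box centre and size, all normalized by the image dimensions. A small sketch of the conversion LabelImg performs when saving in yolo format (the class index and example box are hypothetical):

```python
def to_yolo_line(class_id, box, img_w, img_h):
    """Convert a pixel box (xmin, ymin, xmax, ymax) into a yolo label line:
    'class_id x_center y_center width height', all coordinates normalized to [0, 1]."""
    xmin, ymin, xmax, ymax = box
    x_center = (xmin + xmax) / 2.0 / img_w
    y_center = (ymin + ymax) / 2.0 / img_h
    width = (xmax - xmin) / img_w
    height = (ymax - ymin) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# A 104x104-pixel lesion centred in a 416x416 picture, class 0 (say, tea white star disease)
line = to_yolo_line(0, (156, 156, 260, 260), 416, 416)
```

Because the coordinates are normalized, the same label remains valid when the picture is resized to the network's input resolution.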
S2, carrying out lightweight optimization on the feature extraction backbone module and the feature extraction fusion module in the Yolo v4 model to obtain an optimized Yolo v4 model;
Specifically, S2 may include the following methods:
(1) The feature extraction backbone module CSPDarknet53 of the original Yolo v4 is mainly responsible for preliminary feature extraction from the picture, yielding the three effective feature maps input to the feature extraction fusion module. Borrowing the residual network's idea of increasing network depth, CSPDarknet53 constructs a large number of residual blocks and residual edges; its convolution layers number up to 72 and its parameter count is huge.
The invention adopts the lightweight MobilenetV2 network as the new feature extraction backbone module of the Yolo v4 model, and replaces the deep-layer ReLU6 activation function of the MobilenetV2 network with h-swish, where the activation function h-swish is:
h-swish(x) = x · ReLU6(x + 3) / 6, with ReLU6(x) = min(max(0, x), 6)
In the formula, the gating factor ReLU6(x + 3) / 6 applied to the upper layer's output node value x lies between 0 and 1; it is a further smooth approximation built on ReLU6's limiting of inputs to between 0 and 6, and helps reduce the precision loss when fitting the deep activation layers of the network.
In this embodiment, the lightweight MobilenetV2 serves as the new feature extraction backbone module of the Yolo v4 model. The inverted residual structure of MobilenetV2, which first raises and then lowers the channel dimension, can extract effective disease features in a relatively stable high-dimensional space and strengthens the transmission of feature information while the total number of network layers is reduced. Meanwhile, MobilenetV2's cancellation of the conventional fully connected layer to reduce parameters is similar in spirit to the original feature extraction backbone CSPDarknet53's deletion of the fully connected layer after its convolution layers, which theoretically supports replacing CSPDarknet53 with MobilenetV2 as the new feature extraction module.
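The replacement activation can be written as a scalar function. A minimal sketch of h-swish as defined above (x scaled by the gate ReLU6(x + 3)/6):

```python
def relu6(x):
    """ReLU6 limits its input to the range [0, 6]."""
    return min(max(0.0, x), 6.0)

def h_swish(x):
    """h-swish(x) = x * ReLU6(x + 3) / 6; the gate ReLU6(x + 3)/6 lies in [0, 1]."""
    return x * relu6(x + 3.0) / 6.0

# For x <= -3 the gate is 0, for x >= 3 it is 1, with a smooth ramp in between.
values = [h_swish(x) for x in (-4.0, -3.0, 0.0, 3.0, 6.0)]
```

Unlike a lookup-heavy sigmoid-based swish, h-swish uses only comparisons and arithmetic, which is why it is cheap on embedded hardware.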
(2) The original Yolo v4 feature extraction fusion module consists of the SPPNet and PANet shown in fig. 2, and mainly performs enhanced feature extraction and fusion on the three input feature maps.
To further optimize the Yolo v4 feature extraction fusion module, the invention replaces the ordinary convolutions in the FPN structure within PANet with the dilated convolution DT-Conv, and the ordinary convolutions in the PAN structure within PANet with the depthwise separable convolution DW-Conv, obtaining the optimized feature extraction fusion module;
Specifically, in this embodiment the five ordinary convolutions in the fusion module's FPN structure are replaced by the dilated convolution DT-Conv. During upsampling in the FPN structure the length and width of the feature maps are enlarged, and important features are easily lost for disease pictures with noise and poor imaging quality; by injecting holes into the traditional convolution kernel, the dilated convolution DT-Conv expands the receptive field without actually increasing the kernel size, strengthening the extraction of multi-scale disease features.
The ordinary convolutions in the fusion module's PAN structure are replaced by the depthwise separable convolution DW-Conv for feature extraction, further reducing the huge convolution computation and parameter count.
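The gains from the two replacement convolutions can be checked with simple arithmetic: a dilated kernel covers a wider receptive field at unchanged weight cost, and a depthwise separable convolution splits one k×k convolution into a per-channel spatial filter plus a 1×1 pointwise mix. A sketch (the 96-channel example is chosen to match the out2 map; biases ignored):

```python
def effective_kernel(k, dilation):
    """Side length of the area a k x k kernel covers at the given dilation rate."""
    return k + (k - 1) * (dilation - 1)

def ordinary_conv_params(k, c_in, c_out):
    """Weight count of an ordinary k x k convolution (no bias)."""
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# A 3x3 kernel at dilation rate 2 sees a 5x5 area for the price of 9 weights per filter.
wide = effective_kernel(3, 2)                 # 5
# For a 96 -> 96 channel layer, the separable form needs roughly 1/8 of the weights.
dense = ordinary_conv_params(3, 96, 96)       # 82944
separable = dw_separable_params(3, 96, 96)    # 864 + 9216 = 10080
```

This per-layer saving is the mechanism behind the overall 71.2% parameter reduction reported for the optimized network.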
(3) The feature maps out1, out2 and out3 output by different network layers of the MobilenetV2 optimized in method (1) are extracted: the out1 feature map (size 52×52×32), out2 feature map (size 26×26×96) and out3 feature map (size 13×13×320) output by the 4th, 6th and 8th layers of MobilenetV2 respectively are taken as inputs to the feature extraction fusion module, where 32, 96 and 320 are the channel numbers of the three feature maps and "size" in the invention refers to the feature map's dimensions in pixels;
(4) The feature map out3 is input into the SPPNet of the optimized feature extraction fusion module, and the feature map output after SPPNet processing is input into the PANet of the feature extraction fusion module together with the out1 and out2 feature maps.
Specifically, as shown in fig. 3, the out3 feature map is input into the SPPNet of the feature extraction fusion module; after three CBL (Conv2d_BN_LeakyReLU) convolutions, four-scale block pooling (1×1, 5×5, 9×9, 13×13) and Concat splicing, the output feature map is input, together with the out1 and out2 feature maps, into the PANet of the feature extraction fusion module. SPPNet forms a spatial pyramid pooling layer through multi-scale block pooling, further extracting, at different scales, the disease features produced by the backbone module MobilenetV2.
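A minimal sketch of the four-scale SPP block just described (the input channel count is an assumption, not stated in the text):

```python
import torch
import torch.nn as nn

# Spatial pyramid pooling as described above: parallel max-pooling with 5x5,
# 9x9 and 13x13 windows at stride 1 (padded so the spatial size is kept),
# plus the 1x1 identity branch, concatenated along the channel axis.
class SPP(nn.Module):
    def __init__(self, pool_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(k, stride=1, padding=k // 2) for k in pool_sizes)

    def forward(self, x):
        # the 1x1 branch is the input itself
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)

x = torch.randn(1, 512, 13, 13)   # out3 after the CBL convolutions (assumed 512 channels)
y = SPP()(x)
print(y.shape)                    # torch.Size([1, 2048, 13, 13]) - channels x4
```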
(5) In the PANet of the optimized feature extraction fusion module (shown in fig. 4), the input out3 feature map is first upsampled twice in sequence in the FPN structure, spliced with the out2 and out1 feature maps respectively, and passed through dilated convolutions; then downsampling, splicing and depthwise separable convolution are carried out in the bottom-up PAN structure, and after the three feature maps are fully fused, the result is input to the Yolo v4 prediction module.
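The shape bookkeeping of this top-down/bottom-up fusion can be sketched as follows (the dilated and depthwise separable convolutions are omitted for brevity; only the upsampling, downsampling and splicing steps are shown, so the channel counts are illustrative):

```python
import torch
import torch.nn.functional as F

# Feature maps with the channel counts of this embodiment
out1 = torch.randn(1, 32, 52, 52)
out2 = torch.randn(1, 96, 26, 26)
out3 = torch.randn(1, 320, 13, 13)

# Top-down (FPN) pass: upsample twice in sequence and splice
p2 = torch.cat([out2, F.interpolate(out3, scale_factor=2)], dim=1)  # 26x26, 416 ch
p1 = torch.cat([out1, F.interpolate(p2, scale_factor=2)], dim=1)    # 52x52, 448 ch

# Bottom-up (PAN) pass: downsample and splice (max_pool2d stands in here for
# the stride-2 depthwise separable downsampling convolution)
n2 = torch.cat([p2, F.max_pool2d(p1, 2)], dim=1)                    # 26x26, 864 ch
print(p1.shape, n2.shape)
```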
S3, training and verifying the optimized Yolo v4 model using the dataset for training the Yolo v4 model obtained in step S1, to obtain the optimal Yolo v4 tea disease identification model;
Specifically, S3 includes the following steps:
S31, using MobilenetV2 weights pre-trained on the open-source PASCAL VOC 2007 dataset as the initial weights of the optimized model;
In this embodiment, the training set for training the Yolo v4 model is the final tea disease dataset after Mixup enhancement, and this training set is used to train the lightweight optimized Yolo v4 network model. Based on the idea of transfer learning, the MobilenetV2 weights trained on the open-source PASCAL VOC 2007 dataset are used as the initial weights of the optimized Yolo v4 network model, which avoids the loss fluctuations that may occur when training the network from scratch and improves the convergence of the network.
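The transfer-learning initialization described above can be sketched as a partial state-dict load (the weight file name and the name/shape matching strategy are illustrative assumptions, not part of the disclosure):

```python
import torch

def load_pretrained_backbone(model, weight_path="mobilenetv2_voc2007.pth"):
    """Initialize matching backbone layers from pre-trained weights instead of
    training from scratch; unmatched layers keep their random initialization."""
    state = torch.load(weight_path, map_location="cpu")
    own = model.state_dict()
    # keep only entries whose names and tensor shapes agree with the new model
    matched = {k: v for k, v in state.items()
               if k in own and v.shape == own[k].shape}
    own.update(matched)
    model.load_state_dict(own)
    return len(matched)   # number of transferred tensors
```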
S32, training the optimized Yolo v4 model for multiple rounds using the training set in the dataset;
S33, performing supervised learning on the labeled disease information using the verification set in the dataset for training the Yolo v4 model, providing verification feedback for the training set across the multiple rounds of network training, and performing parameter adjustment and overfitting judgment in time to update the Yolo v4 model; the initial weights are automatically adjusted during network training on the dataset to generate the weights obtained when training finishes;
S34, selecting several groups of weights obtained after network training, loading them into the Yolo v4 model prediction module, and using the prediction module to perform comparative analysis of the prediction effect on the test set in the dataset for training the Yolo v4 model; after the appropriate weights are determined, the optimal Yolo v4 tea disease identification model of this embodiment is obtained. The comparative analysis in this embodiment covers the identification accuracy for tea diseases, the identification time, the real-time performance of the model, and the like.
As shown in fig. 2, the Yolo v4 model prediction module consists of three Yolo Heads, whose final output tensors are 52×52×27, 26×26×27 and 13×13×27, respectively. The depth 27 equals 3×(4+5), where 3 is the number of candidate boxes per grid cell, 4 is the number of disease categories in the dataset used in this example of the invention, and 5 is the number of adjustable candidate-box parameters: the x- and y-axis coordinate offsets, height H, width W, and confidence. The Yolo v4 prediction module judges the confidence that an object lies in a candidate box, optimizes and adjusts the coordinates, width, height and other parameters of the candidate box, and draws the final disease feature prediction box on the test-set picture.
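The head-depth arithmetic above can be checked directly; the figures come straight from the text (3 anchors, 4 disease classes, 5 box parameters):

```python
# Each Yolo Head outputs anchors_per_cell * (num_classes + 5) channels per
# grid cell; the 5 are x offset, y offset, width W, height H and confidence.
num_classes = 4        # four tea disease categories in this embodiment
anchors_per_cell = 3   # three candidate boxes per grid cell
depth = anchors_per_cell * (num_classes + 5)
print(depth)           # 27, matching the 52x52x27, 26x26x27 and 13x13x27 heads
```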
The experimental environment and hardware platform used for training and verifying the network model in this embodiment are: a WIN10 operating system with the Pytorch deep learning framework; the CPU is a 6-core 12-thread Intel(R) Core(TM) i7-6800K; the GPU is two GTX 1080Ti cards, with CUDA used to accelerate network training; training runs of 30 and 100 epochs were performed for comparative analysis of network performance.
S4, identifying tea disease images using the optimal Yolo v4 tea disease identification model obtained in step S3, and marking the position of the disease on the leaf and the disease category with rectangular prediction boxes.
As shown in figs. 5 and 6, the actual network training and test results show that the Mixup mixed-sample enhancement introduced by the invention overcomes the high false-detection rate of the original Yolo v4 on tea diseases in real tea garden scenes, and that the lightweight optimization adopted by the invention markedly reduces the model's demands on GPU computing resources and performance while further improving the detection efficiency and identification accuracy on tea disease targets; the mean average precision (mAP) for target detection reaches approximately 84% after the 100-epoch training run is completed.
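The Mixup mixed-sample enhancement credited above blends two training pictures with a Beta-distributed coefficient; a minimal sketch follows (the alpha value is an assumption, as this text does not state it):

```python
import numpy as np

def mixup(img_a, img_b, alpha=1.5):
    """Blend two same-sized images; returns the mixed image and the mixing
    coefficient lam drawn from Beta(alpha, alpha)."""
    lam = np.random.beta(alpha, alpha)
    mixed = lam * img_a.astype(np.float32) + (1 - lam) * img_b.astype(np.float32)
    return np.rint(mixed).astype(img_a.dtype), lam

a = np.full((4, 4, 3), 200, dtype=np.uint8)   # stand-ins for two disease pictures
b = np.full((4, 4, 3), 100, dtype=np.uint8)
mixed, lam = mixup(a, b)
print(mixed.shape)                            # (4, 4, 3)
```

In the detection setting of this method, the disease bounding boxes of both source pictures would typically both be kept as labels of the mixed picture.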
It will be appreciated by those skilled in the art that the embodiments of the present method may also be provided as a system or as a computer program product implementing the method.
As shown in fig. 7, another aspect of the embodiment of the invention discloses a tea disease identification system based on the lightweight optimized Yolo v4. A user terminal collects tea disease pictures in their natural state and uploads them to a cloud server; the cloud server trains on the collected tea disease pictures with the optimized Yolo v4 model, identifies tea disease images with the trained optimal Yolo v4 identification model, determines the disease category and the specific position of the disease, and returns the identification result to the user terminal; the user terminal displays the identification result and gives the corresponding disease control strategy.
In the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and the identical or similar parts among the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and the relevant points can be found in the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (7)
1. A tea disease identification method based on a lightweight optimized Yolo v4, characterized by comprising the following steps:
S1, collecting tea disease pictures, and preprocessing the pictures to obtain a dataset for training a Yolo v4 model; wherein preprocessing the collected tea disease pictures specifically includes:
S11, setting a tea disease characteristic imaging threshold, screening the collected tea disease pictures according to the tea disease characteristic imaging threshold, and removing from the collection the tea disease pictures whose pixel values are below the threshold;
S12, classifying the screened tea disease pictures according to tea disease characteristics to obtain disease categories of the tea disease pictures, wherein the tea disease characteristics comprise disease shapes and disease colors;
S13, establishing an original data set according to the classified tea disease pictures, and expanding the tea disease pictures in the original data set by using a similar data enhancement method;
S14, dividing the expanded original data set into a training set, a verification set and a test set;
S15, taking tea disease pictures of different disease categories in the training set as input samples for Mixup mixed-sample enhancement, to obtain the Mixup-enhanced training set;
S16, performing yolo-format labeling of the tea disease pictures in the Mixup-enhanced training set and in the verification set not subjected to Mixup enhancement, according to the disease category of each picture and the position of the disease on the leaf, and generating the dataset for training the Yolo v4 model from the yolo-format-labeled training set and verification set together with the test set not subjected to yolo-format labeling;
S2, carrying out lightweight optimization on the feature extraction backbone module and the feature extraction fusion module in the Yolo v4 model to obtain an optimized Yolo v4 model; the lightweight optimization of the feature extraction backbone module and the feature extraction fusion module in the Yolo v4 model specifically includes:
(1) Adopting the lightweight MobilenetV2 network as the new feature extraction backbone module of the Yolo v4 model, replacing the ReLU activation function in the deep layers of the MobilenetV2 network with h-swish, and thereby optimizing MobilenetV2, wherein the activation function h-swish is:
h-swish(x) = x · ReLU6(x + 3) / 6
wherein x represents the tea disease feature information extracted by the MobilenetV2 network layers;
(2) Adopting dilated convolutions (DT-Conv) as the convolutions in the FPN structure of the feature extraction fusion module and depthwise separable convolutions (DW-Conv) as the convolutions in the PAN structure of the feature extraction fusion module, thereby optimizing the feature extraction fusion module in the Yolo v4 model;
(3) Extracting the feature maps out1, out2 and out3 output by different network layers of the MobilenetV2 optimized in step (1), wherein the size of out1 is 52×52 with 32 channels, the size of out2 is 26×26 with 96 channels, and the size of out3 is 13×13 with 320 channels;
(4) Inputting the feature map out3 into SPPNet of the optimized feature extraction fusion module, processing the feature map with SPPNet, and inputting the output feature map and the feature maps out1 and out2 into PANet of the feature extraction fusion module;
S3, training and verifying the optimized Yolo v4 model using the dataset for training the Yolo v4 model obtained in step S1, to obtain an optimal Yolo v4 model for identifying tea diseases;
and S4, identifying tea disease images using the optimal Yolo v4 model obtained in step S3.
2. The tea disease identification method based on the lightweight optimized Yolo v4 as claimed in claim 1, wherein collecting tea disease pictures in step S1 includes collecting, with a handheld digital camera, tea disease pictures at different times and under various illumination conditions in a tea garden.
3. The tea disease identification method based on the lightweight optimized Yolo v4 as claimed in claim 1, wherein in step S14 the expanded original dataset is divided into a training set, a verification set and a test set in a 7:2:1 ratio.
4. The tea disease identification method based on the lightweight optimized Yolo v4 as claimed in claim 1, wherein in step (4) the feature map out3 is input into the SPPNet of the optimized feature extraction fusion module for processing, which specifically includes: inputting the feature map out3 into the SPPNet of the feature extraction fusion module and subjecting it to three CBL convolutions, four-scale block pooling and Concat splicing.
5. The tea disease identification method based on the lightweight optimized Yolo v4 as claimed in claim 1, wherein step S3 specifically includes:
S31, using MobilenetV2 weights pre-trained on an open-source dataset as the initial weights of the optimized Yolo v4 model;
S32, performing multiple rounds of network training on the optimized Yolo v4 network model using the training set in the Yolo v4 model dataset;
S33, performing supervised learning on the labeled disease information using the verification set in the dataset for training the Yolo v4 model, providing verification feedback for the training set across the multiple rounds of network training, and performing parameter adjustment and overfitting judgment in time to update the Yolo v4 model; the initial weights are automatically adjusted during network training on the dataset to generate the weights obtained when training finishes;
S34, selecting several groups of weights obtained after network training, loading them into the Yolo v4 model prediction module, and using the Yolo v4 model prediction module to perform comparative analysis of the prediction effect on the tea disease pictures in the test set, to obtain the optimal Yolo v4 tea disease identification model.
6. The tea disease identification method based on the lightweight optimized Yolo v4 as claimed in claim 5, wherein the comparative analysis of the tea disease pictures in the test set in step S34 specifically includes comparative analysis of the identification accuracy for tea diseases, the identification time, and the real-time performance of the model.
7. A tea disease identification system based on the lightweight optimized Yolo v4, comprising a user terminal and a cloud server, wherein the user terminal is used for collecting tea disease pictures and displaying tea disease identification results; and the cloud server is used for training on and identifying the tea disease pictures collected by the user terminal according to the above tea disease identification method based on the lightweight optimized Yolo v4, and returning the identification result to the user terminal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111663816.9A CN114359727B (en) | 2021-12-31 | 2021-12-31 | Tea disease identification method and system based on lightweight optimization Yolo v4 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111663816.9A CN114359727B (en) | 2021-12-31 | 2021-12-31 | Tea disease identification method and system based on lightweight optimization Yolo v4 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114359727A CN114359727A (en) | 2022-04-15 |
CN114359727B true CN114359727B (en) | 2024-06-04 |
Family
ID=81105540
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111663816.9A Active CN114359727B (en) | 2021-12-31 | 2021-12-31 | Tea disease identification method and system based on lightweight optimization Yolo v4 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114359727B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114972208B (en) * | 2022-05-05 | 2024-04-09 | 扬州大学 | YOLOv 4-based lightweight wheat scab detection method |
CN114758345B (en) * | 2022-05-07 | 2024-10-29 | 成都鸿钰网络科技有限公司 | Yolo V5-based smoke and fire monitoring azimuth extraction method and system |
CN114943697A (en) * | 2022-05-16 | 2022-08-26 | 福建农林大学 | Tomato maturity detection method based on lightweight improved YOLO |
CN114782455B (en) * | 2022-06-27 | 2022-09-09 | 中国科学院合肥物质科学研究院 | Cotton row center line image extraction method for agricultural machine embedded equipment |
CN115100439B (en) * | 2022-07-15 | 2022-11-04 | 广东瑞恩科技有限公司 | Water body abnormity detection method for aquaculture |
CN114998324A (en) * | 2022-07-19 | 2022-09-02 | 深圳新视智科技术有限公司 | Training method and device for semiconductor wafer defect detection model |
CN115272701B (en) * | 2022-08-11 | 2023-08-22 | 电子科技大学 | Ship target detection method based on self-adaptive feature extraction and decoupling prediction head |
CN115205637B (en) * | 2022-09-19 | 2022-12-02 | 山东世纪矿山机电有限公司 | Intelligent identification method for mine car materials |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2020102885A4 (en) * | 2020-10-20 | 2020-12-17 | Xijing University | Disease recognition method of winter jujube based on deep convolutional neural network and disease image |
CN112464910A (en) * | 2020-12-18 | 2021-03-09 | 杭州电子科技大学 | Traffic sign identification method based on YOLO v4-tiny |
CN113312999A (en) * | 2021-05-19 | 2021-08-27 | 华南农业大学 | High-precision detection method and device for diaphorina citri in natural orchard scene |
CN113537244A (en) * | 2021-07-23 | 2021-10-22 | 深圳职业技术学院 | Livestock image target detection method and device based on light-weight YOLOv4 |
CN113609938A (en) * | 2021-07-23 | 2021-11-05 | 华南农业大学 | Rice disease and insect pest identification method based on improved SSD network |
CN113627258A (en) * | 2021-07-12 | 2021-11-09 | 河南理工大学 | Apple leaf pathological detection method |
Also Published As
Publication number | Publication date |
---|---|
CN114359727A (en) | 2022-04-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114359727B (en) | Tea disease identification method and system based on lightweight optimization Yolo v4 | |
CN105718945B (en) | Apple picking robot night image recognition method based on watershed and neural network | |
WO2020007363A1 (en) | Method and apparatus for identifying number of targets, and computer-readable storage medium | |
CN111696101A (en) | Light-weight solanaceae disease identification method based on SE-Inception | |
CN110479636B (en) | Method and device for automatically sorting tobacco leaves based on neural network | |
CN108416266A (en) | A kind of video behavior method for quickly identifying extracting moving target using light stream | |
CN109325495A (en) | A kind of crop image segmentation system and method based on deep neural network modeling | |
CN117372881B (en) | Intelligent identification method, medium and system for tobacco plant diseases and insect pests | |
CN111178177A (en) | Cucumber disease identification method based on convolutional neural network | |
Chen et al. | Agricultural remote sensing image cultivated land extraction technology based on deep learning | |
Gao et al. | A mobile application for plant recognition through deep learning | |
CN115984698A (en) | Litchi fruit growing period identification method based on improved YOLOv5 | |
CN115115830A (en) | Improved Transformer-based livestock image instance segmentation method | |
CN113077452A (en) | Apple tree pest and disease detection method based on DNN network and spot detection algorithm | |
Sulistyo et al. | Building a globally optimized computational intelligent image processing algorithm for on-site inference of nitrogen in plants | |
CN114898359B (en) | Litchi plant diseases and insect pests detection method based on improvement EFFICIENTDET | |
CN118097372B (en) | Crop growth visual prediction method based on artificial intelligence | |
CN113011506A (en) | Texture image classification method based on depth re-fractal spectrum network | |
CN108280516A (en) | The optimization method of Intelligent evolution is mutually won between a kind of multigroup convolutional neural networks | |
Jin et al. | An improved mask r-cnn method for weed segmentation | |
CN117253192A (en) | Intelligent system and method for silkworm breeding | |
CN113449712B (en) | Goat face identification method based on improved Alexnet network | |
CN107133634B (en) | Method and device for acquiring plant water shortage degree | |
CN109919964A (en) | The method that Gaussian Background modeling technique based on mathematical morphology carries out image procossing | |
CN114463741A (en) | Litchi disease and insect pest identification method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||