CN118172676B - A method for detecting farmland pests based on quantum deep learning - Google Patents
A method for detecting farmland pests based on quantum deep learning
- Publication number
- CN118172676B CN118172676B CN202410584757.3A CN202410584757A CN118172676B CN 118172676 B CN118172676 B CN 118172676B CN 202410584757 A CN202410584757 A CN 202410584757A CN 118172676 B CN118172676 B CN 118172676B
- Authority
- CN
- China
- Prior art keywords
- image
- quantum
- grayscale
- pest
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V20/188: Scenes; Terrestrial scenes; Vegetation
- G06N3/0464: Computing arrangements based on biological models; Neural networks; Convolutional networks [CNN, ConvNet]
- G06N3/084: Neural networks; Learning methods; Backpropagation, e.g. using gradient descent
- G06N3/09: Neural networks; Learning methods; Supervised learning
- G06V10/44: Extraction of image or video features; Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/764: Recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/806: Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level, of extracted features
- G06V10/82: Recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
Description
Technical Field

The present invention relates to the technical field of quantum deep learning, and in particular to a method for detecting farmland pests based on quantum deep learning.
Background Art

Image-based pest detection dates back to the 1960s in the United States, where machine vision was used for offline inspection of adult grain borers. That technique recognized the main stored-grain pests at a rate of up to 90%, but factors such as broken kernels, grass seeds, and pest morphology still had a strong influence on correct identification.
At present, farmland pest detection takes two main forms: traditional target detection algorithms and deep-learning-based detection. The traditional pipeline is roughly divided into candidate region selection, feature extraction, and category judgment. Although traditional algorithms introduce many operators and generalization-preserving techniques for these three stages, the resulting improvement is limited and several major problems remain: (1) the generated candidate boxes are highly redundant, which inflates the computational load; (2) the sliding window cannot adapt to changes in target scale; (3) the feature operators extract only low-dimensional visual features and lose the semantic relationship between the target and its surroundings in the image. The development and application of deep learning have greatly improved the accuracy and efficiency of target detection, but problems of heavy computation, high hardware cost, and complex model design persist. How to improve detection efficiency and how to detect farmland pests automatically and accurately are therefore urgent open problems.

Convolutional neural networks also require large datasets to train their convolution kernels, and training data of different scales must first be normalized to a common scale. Consequently, convolutional neural networks are characterized by long training times and numerous, cumbersome trained models.
Summary of the Invention

To overcome the above problems in the prior art, the present invention proposes a method for detecting farmland pests based on quantum deep learning.

The technical solution adopted by the present invention is a method for detecting farmland pests based on quantum deep learning, specifically comprising the following steps:

Step 1: collect images of the pests to be detected and convert the collected images into quantum images using the novel enhanced quantum representation (NEQR) model.

Step 2: preprocess the pest images obtained in step 1 and store the image position information in the quantum system.

Step 3: build an improved deep convolutional neural network to classify and identify the pest images; if an image contains a pest, feed it into an image segmentation network and reconstruct the pest image.

Step 4: optimize the image obtained in step 3 using the novel enhanced quantum representation model.

Step 5: extract the data features of the pest images, classify pests sharing the same data features, and establish a farmland pest classification database.

Step 6: collect images of infested farmland and perform image edge detection using pixel grayscale correlation; using the farmland pest classification database from step 5, determine the species and number of pests in each region of the field and spray the corresponding pesticide in a targeted manner.
In the above method, the novel enhanced quantum representation model of step 1 is expressed as:

$$|I\rangle = \frac{1}{2^n}\sum_{j=0}^{2^{2n}-1}|M\rangle \otimes |j\rangle$$

where $|I\rangle$ denotes the grayscale image, $|M\rangle$ denotes the binary grayscale information available for encoding, $n$ is the qubit length, and $\otimes$ denotes the tensor product of quantum computing.
In the above method, the deep convolutional neural network of step 3 is an optimized YOLOv5 model comprising an input stage, a backbone (support) layer, a feature fusion layer, and an output layer. The input stage applies the Mosaic algorithm for data augmentation. The backbone layer uses a Focus structure to slice the input image into four parts, concatenates the parts along the channel dimension, and then performs convolution. In the feature fusion layer, a path aggregation network is responsible for feature fusion, adding an information path on top of the feature extractor. During training, a stochastic gradient descent algorithm is used to update the model parameters.
In the above method, step 3 specifically comprises: taking the preprocessed image from step 2 as the input of the optimized YOLOv5 model, each input frame being denoted by $x$.

The output of the optimized YOLOv5 model contains the coordinate information of each target pest and its class probabilities; for each input image $x$ the output is

$$Y = \{y_1, y_2, \dots, y_m\}$$

where $m$ is the number of targets detected in the image and $y_i$ is the prediction for the $i$-th target. Each prediction $y_i$ comprises the bounding-box coordinates $b_i$ of the target and the corresponding class probabilities $p_i$, with the box given by

$$b_i = (x_i, y_i, w_i, h_i)$$

where $(x_i, y_i)$ is the top-left corner of the bounding box and $w_i$ and $h_i$ are its width and height. The class probabilities are

$$p_i = (p_{i1}, p_{i2}, \dots, p_{ic})$$

where $c$ is the number of target pest categories and $p_{ij}$ is the probability that the $i$-th target belongs to the $j$-th category.
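For illustration, the following sketch parses a detection array of the shape just described; the $(m, 4+c)$ layout and the helper name are assumptions made for this example, not part of the claimed method.

```python
import numpy as np

def parse_detections(raw, num_classes):
    """Split raw detections of shape (m, 4 + c) into boxes and class probabilities.

    Each row is assumed to hold (x, y, w, h) for the top-left corner, width and
    height of a bounding box, followed by c per-class probabilities.
    """
    raw = np.asarray(raw, dtype=float)
    boxes = raw[:, :4]                   # b_i = (x_i, y_i, w_i, h_i)
    probs = raw[:, 4:4 + num_classes]    # p_i = (p_i1, ..., p_ic)
    return boxes, probs

# Example: two detections over c = 3 pest categories
raw = [[12, 30, 40, 25, 0.7, 0.2, 0.1],
       [80, 55, 22, 18, 0.1, 0.6, 0.3]]
boxes, probs = parse_detections(raw, num_classes=3)
print(boxes[0], probs[0].argmax())  # box of the first target and its most likely class
```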
In the above method, step 4 specifically comprises:

Step 4.1: convert the digital image into a quantum image. For an image of size $2^n \times 2^n$ with grayscale range $[0, 2^q - 1]$, the NEQR model is expressed as:

$$|I\rangle = \frac{1}{2^n}\sum_{Y=0}^{2^n-1}\sum_{X=0}^{2^n-1}|f(Y,X)\rangle \otimes |YX\rangle$$

where $|I\rangle$ is the image quantum state and

$$|f(Y,X)\rangle = |C_{YX}^{q-1}\,C_{YX}^{q-2}\cdots C_{YX}^{0}\rangle,\qquad C_{YX}^{k}\in\{0,1\}.$$

The image $|I\rangle$ consists of two parts, $|f(Y,X)\rangle$ and $|YX\rangle$, which store the grayscale information and the position information of the image respectively. The position information $|YX\rangle$ is composed of the vertical coordinate and the horizontal coordinate:

$$|YX\rangle = |Y\rangle|X\rangle = |y_{n-1}y_{n-2}\cdots y_{0}\rangle\,|x_{n-1}x_{n-2}\cdots x_{0}\rangle$$

where the first $n$ qubits store the vertical coordinate of the image and the last $n$ qubits store its horizontal coordinate.

The NEQR model of a binary image takes the form:

$$|I\rangle = \frac{1}{2^n}\sum_{Y=0}^{2^n-1}\sum_{X=0}^{2^n-1}|C_{YX}\rangle \otimes |YX\rangle,\qquad C_{YX}\in\{0,1\}$$

where $|C_{YX}\rangle$ stores the binary information of the image and $|YX\rangle$ stores its position information.

Step 4.2: transform the stored state with quantum logic gates. The initial state of the quantum system is:

$$|\psi\rangle_{0} = |0\rangle^{\otimes(q+2n)}.$$

Applying a Hadamard (H) transform to each qubit that encodes position information produces a superposition over $2^{2n}$ states:

$$|\psi\rangle_{1} = \frac{1}{2^n}\,|0\rangle^{\otimes q}\otimes\sum_{j=0}^{2^{2n}-1}|j\rangle.$$

According to the pixel information of the actual image and the mapping between coordinates and grayscale values, the initial state sequence representing grayscale information in the quantum image is converted into the corresponding M state, realizing the unique mapping between position information and grayscale information:

$$|\psi\rangle_{2} = \frac{1}{2^n}\sum_{j=0}^{2^{2n}-1}|M_j\rangle \otimes |j\rangle = |I\rangle$$

where $n$ is the qubit length, $|j\rangle$ ranges over all the basis states of the quantum grayscale image system, $\otimes$ denotes the tensor product of quantum computing, $|M_j\rangle$ denotes the binary grayscale information available for encoding, and $|I\rangle$ denotes the grayscale image.

Step 4.3: perform quantum measurement on the image quantum state. The position measurement operator $P_{YX}$ is:

$$P_{YX} = I^{\otimes q}\otimes |YX\rangle\langle YX|$$

where $I^{\otimes q}$ is the direct product of $q$ identity matrices.

Projective measurement is used to recover the grayscale values from the quantum state; the measurement operator $M$ for the grayscale information is:

$$M = \sum_{Y=0}^{2^n-1}\sum_{X=0}^{2^n-1} f(Y,X)\,|YX\rangle\langle YX|$$

where $|Y\rangle$ and $|X\rangle$ both denote the stored position information of the image.
In the above method, the image edge detection by pixel grayscale correlation in step 6 specifically comprises:

Step 6.1: convert the image into a quantum image.

Let the grayscale digital image $f(x,y)$ denote the gray value at point $(x,y)$, with $0 \le x \le 2^{n}-1$ and $0 \le y \le 2^{n}-1$. If $p$ denotes the probability that the gray value at pixel $(x,y)$ is 1, and $1-p$ the probability that it is 0, the image can be represented by qubits as:

$$|I(x,y)\rangle = \sqrt{1-p}\,|0\rangle + \sqrt{p}\,|1\rangle.$$

Step 6.2: describe the grayscale transitions of neighboring pixels. Let three adjacent pixels in the image have gray values $f_1$, $f_2$, and $f_3$. By the principle of quantum state superposition they form a neighborhood grayscale correlation system:

$$|\phi\rangle = \sum_{k=0}^{7} a_k\,|k\rangle = a_0|000\rangle + a_1|001\rangle + \cdots + a_7|111\rangle.$$

In this image quantum correlation system each state vector is a three-bit binary string; a state vector with a 0-to-1 jump read from left to right is a positive-going state vector, and one with a 1-to-0 jump is a negative-going state vector.

Taking the difference between the probability of each positive- or negative-going state vector and the probability of its corresponding anti-state vector yields the local transition measure, where the $a_k$ are the probability amplitudes attached to the superposed state vectors.

Step 6.3: build the edge detection template by averaging.

The grayscale transition of the image, i.e. the edge grayscale, is described by the sum of the detection components in the horizontal and vertical directions:

$$E(x,y) = G_{H}(x,y) + G_{V}(x,y)$$

where $E(x,y)$ denotes the image transition (the edge grayscale), $G_{H}$ the detection component in the horizontal direction, and $G_{V}$ the detection component in the vertical direction.

Step 6.4: set the edge detection threshold.

Three times the mean of the edge grayscale is taken as the edge detection threshold:

$$T = 3\,\bar{E}$$

where $T$ is the edge detection threshold and $\bar{E}$ the mean edge grayscale.

A binary edge image function $e(x,y)$ is then extracted: if the edge grayscale at a pixel exceeds the threshold $T$, $e(x,y)$ is 0; otherwise it is 1.
The beneficial effects of the present invention are: (1) Quantum optimization of images: on top of the convolutional neural network, the NEQR quantum image model is applied to the construction and optimization of pest detection images; a quantum threshold denoising algorithm effectively removes image noise and improves image clarity and quality; and a grayscale correlation algorithm performs edge detection and smooths the image, yielding higher accuracy.

(2) Quantum improvement of image processing efficiency: edge detection via pixel grayscale correlation with a quantum algorithm improves the precision of edge recognition, so edge information in the image is captured more accurately. Detecting and recognizing images with a convolutional neural network greatly accelerates image processing; the YOLOv5 algorithm compensates for the weaknesses of deep learning in image processing, increasing computational accuracy and efficiency; and the NEQR model in turn compensates for the shortcomings of the convolutional neural network.
Brief Description of the Drawings

The present invention is further described below in conjunction with the accompanying drawings and embodiments.

FIG. 1 is a schematic diagram of the technical route of the present invention;

FIG. 2 is a schematic diagram of a 2x2 NEQR image and its quantum states according to an embodiment of the present invention;

FIG. 3 is a flow chart of classifying and identifying pest images in an embodiment of the present invention;

FIG. 4 is a network architecture diagram of the YOLOv5 model according to an embodiment of the present invention.
Detailed Description

To enable those skilled in the art to better understand the technical solution of the present invention, the invention is described in detail below in conjunction with the drawings and specific embodiments.

As shown in FIG. 1, this embodiment discloses a method for detecting farmland pests based on quantum deep learning. The embodiment applies the novel enhanced quantum representation (NEQR) of images to optimize pest detection images and uses the YOLOv5 target detection algorithm to improve image accuracy. Since the color and position of each image pixel are represented by classical numbers, the embodiment uses the NEQR encoding method, in which quantum states and their coefficients represent pixel position and color information.

The main goal of the farmland pest image dataset is to help researchers identify the species and number of pests in farmland quickly and accurately. By training and testing on this dataset, a machine learning algorithm can automatically identify and classify farmland pest images. The farmland pest dataset constructed in this embodiment contains 10 images of normal farmland crops and 20 images of infested farmland.
The specific implementation of this embodiment, shown in FIG. 1, comprises the following. Step 1: collect images of the pests to be detected and convert the collected images into quantum images using the novel enhanced quantum representation model.

To determine the specific characteristic values of a pest, its position information and grayscale information must be stored. The NEQR expression of a quantum grayscale image is:

$$|I\rangle = \frac{1}{2^n}\sum_{j=0}^{2^{2n}-1}|M\rangle \otimes |j\rangle$$

where $|I\rangle$ denotes the grayscale image, $|M\rangle$ denotes the binary grayscale information available for encoding, $n$ is the qubit length, and $\otimes$ denotes the tensor product of quantum computing. $M$ is the decimal representation of the gray value; when extended to the quantum image system it must be transformed into a binary qubit sequence. Notably, the number of qubits representing grayscale information depends on the grayscale dynamic range of the prepared image. $j$ is the decimal representation of the position information; likewise, the position information of the image must be expanded into a binary qubit sequence, with $n$ the qubit length per coordinate and

$$|j\rangle = |j_{2n-1}\,j_{2n-2}\cdots j_{0}\rangle,\qquad j_k\in\{0,1\},$$

the $|j\rangle$ ranging over all the basis states of the quantum grayscale image system. $\otimes$ is the tensor product symbol of quantum computing and is the key operation for composing quantum logic gates.
Step 2: preprocess the pest image obtained in step 1 and store the image position information in the quantum system.

After the pest image data are calibrated, a threshold denoising algorithm is applied, and the pest images are then enhanced so that the characteristic values of the pests become more prominent, facilitating image recognition and feature acquisition; for example, contrast enhancement can be used to make pest features stand out. In digital image processing each pixel carries position information and grayscale information, and pixel content can be accessed through the position information in the pest image. In the quantum image, therefore, the position information must be split into a binary quantum-sequence expression in order to be stored in the quantum system:

$$|j\rangle = |y\rangle|x\rangle = |y_{n-1}\cdots y_{0}\rangle\,|x_{n-1}\cdots x_{0}\rangle$$

where $|y\rangle$ carries the information in the y direction and $|x\rangle$ the information in the x direction.

From the above, for a $2\times2$ grayscale image whose four pixels have gray values 0, 100, 200, and 255, the corresponding NEQR expression can be written as:

$$|I\rangle = \frac{1}{2}\big(|M_0\rangle|00\rangle + |M_1\rangle|01\rangle + |M_2\rangle|10\rangle + |M_3\rangle|11\rangle\big)$$

where $M_0$, $M_1$, $M_2$, $M_3$ are expressed as binary sequences and represent the grayscale information in the quantum image, and the grayscale information at each pixel position is a unique mapping, i.e. each position corresponds to exactly one gray value, as shown in FIG. 2.
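The 2x2 example can be checked with a purely classical sketch that prints the NEQR basis labels; the 8-bit gray register is an assumption matching the 0-255 range discussed next.

```python
# Classical sketch of NEQR encoding for the 2x2 example above (assumption:
# 8-bit grayscale, n = 1 position qubit per axis, so |j> has 2 bits).
pixels = {0b00: 0, 0b01: 100, 0b10: 200, 0b11: 255}  # position j -> gray value

terms = []
for j, gray in pixels.items():
    m = format(gray, "08b")   # |M_j>: gray value as a q = 8 bit string
    pos = format(j, "02b")    # |j>: y bit followed by x bit
    terms.append(f"|{m}>|{pos}>")

# Amplitude 1/2^n = 1/2 on every term of the superposition
print("|I> = 1/2 (" + " + ".join(terms) + ")")
```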
A grayscale image contains only luminance information and no color information; its luminance runs continuously from dark to bright. Grayscale divides the range between white and black into levels on a logarithmic scale over 0-255. Compared with the original color image, a grayscale image carries no color information, so the amount of information after grayscale conversion is greatly reduced, and the computational load of subsequent image processing drops correspondingly.
Step 3: build an improved deep convolutional neural network to classify and identify the pest images; if an image contains a pest, feed it into the image segmentation network and reconstruct the pest image, as shown in FIG. 3.

The deep convolutional neural network model is trained on the public ImageNet image dataset, with the model parameters continually adjusted by the quantum algorithm. During training the model gradually learns image features and converts them into feature vectors, which can be used in subsequent image processing and applications.

The deep convolutional neural network model in this embodiment is an optimized YOLOv5 model.

Compared with other methods, the YOLOv5 algorithm improves detection accuracy and speed and introduces new design ideas and strategies. It performs single-stage detection, detecting targets over the entire image in one pass and thus supporting real-time detection tasks. Its anchor-free design predicts bounding-box position and size directly, improving the adaptability and generalization of the model.

The network architecture of the optimized YOLOv5 model is shown in FIG. 4. The YOLOv5 detector can be divided into an input stage, a backbone (support) layer, a feature fusion layer, and an output layer. The input stage applies the Mosaic algorithm for data augmentation, enriching the dataset and reducing training time to some extent. In the backbone layer, a Focus structure slices the input image into four parts, each part carrying data equivalent to 2x downsampling; the parts are concatenated along the channel dimension and then convolved. In the feature fusion layer, a path aggregation network (PANet) performs feature fusion, adding an information path on top of the feature pyramid network (FPN) and thereby shortening the information transmission path.

Before the YOLOv5 model is used for pest identification and detection, it is first trained.

The ImageNet image dataset is used to train the YOLOv5 model. The dataset is divided into training, validation, and test sets, which helps evaluate model performance and tune hyperparameters: the training set drives parameter updates and optimization, the validation set monitors performance and selects the best model configuration, and the test set evaluates the final trained model on unseen data. To achieve accurate detection and identification of farmland pests, the multi-target detection model based on the YOLOv5 algorithm is optimized on the preprocessed pest image data.
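A minimal split routine in this spirit is sketched below; the 8:1:1 ratio and the reuse of the 2386-sample count from this embodiment are illustrative assumptions, since the text does not fix a ratio.

```python
import random

def split_dataset(samples, train=0.8, val=0.1, seed=42):
    """Shuffle and split a list of samples into train/val/test subsets."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(2386))
print(len(train_set), len(val_set), len(test_set))  # 1908 238 240
```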
To obtain better model performance, the stochastic gradient descent (SGD) algorithm is used to update the model parameters, progressively seeking the minimum of the loss function and thereby improving the accuracy and effectiveness of the model. The model optimization objective is expressed as:

$$L = L_{cls}(y, \hat{y}) + \lambda\,L_{loc}(t, \hat{t})$$

where $y$ is the ground-truth label, $\hat{y}$ the model prediction, $t$ the ground-truth target position coordinates, and $\hat{t}$ the model's prediction of the target position; $L_{cls}$ and $L_{loc}$ denote the classification loss and the localization loss respectively, and $\lambda$ is the weight of the loss term.

The parameter update rule (using the SGD algorithm) is:

$$\theta \leftarrow \theta - \eta\,\nabla_{\theta}L$$

where $\theta$ denotes the model parameters, $\eta$ the learning rate, and $\nabla_{\theta}L$ the gradient of the loss function with respect to the parameters. Training and optimizing the YOLOv5 model requires jointly considering the dataset, model configuration, training settings, training procedure, hyperparameter tuning, model evaluation, optimization, and deployment; better model performance and practical results are obtained through repeated trials and adjustment.
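In PyTorch terms, one SGD update under this composite loss could look as follows; the linear placeholder model, the loss choices, and the weight of 0.5 are assumptions standing in for the real detector.

```python
import torch

model = torch.nn.Linear(16, 4)                             # placeholder for the detector
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # eta = 0.01
lam = 0.5                                                  # weight of the localization term

x = torch.randn(8, 16)                  # a batch of feature vectors
cls_target = torch.randint(0, 4, (8,))  # class labels y
loc_target = torch.randn(8, 4)          # box coordinates t

logits = model(x)
loss = (torch.nn.functional.cross_entropy(logits, cls_target)      # L_cls
        + lam * torch.nn.functional.mse_loss(logits, loc_target))  # L_loc

optimizer.zero_grad()
loss.backward()    # gradient of L with respect to theta
optimizer.step()   # theta <- theta - eta * grad
```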
The preprocessed image from step 2 is taken as the input of the optimized YOLOv5 model, and each input frame is denoted by $x$.

The output of the optimized YOLOv5 model contains the coordinate information of each target pest and its class probabilities; for each input image $x$ the output is:

$$Y = \{y_1, y_2, \dots, y_m\}$$

where $m$ is the number of targets detected in the image and $y_i$ is the prediction for the $i$-th target. Each prediction $y_i$ comprises the bounding-box coordinates $b_i$ of the target and the corresponding class probabilities $p_i$, with the box given by:

$$b_i = (x_i, y_i, w_i, h_i)$$

where $(x_i, y_i)$ is the top-left corner of the bounding box and $w_i$ and $h_i$ are its width and height. The class probabilities are:

$$p_i = (p_{i1}, p_{i2}, \dots, p_{ic})$$

where $c$ is the number of target pest categories and $p_{ij}$ is the probability that the $i$-th target belongs to the $j$-th category. To identify the pest species, the species of each target is determined from the target class probabilities together with the predefined pest species labels, and corresponding management measures are taken. Note that pest species recognition requires a working knowledge of computer vision and deep learning and of the principles and applications of target detection algorithms such as YOLOv5; in practical applications, the generalization ability and real-time performance of the model should be considered and the system optimized and adjusted according to the actual situation.
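Mapping class probabilities to a species label then reduces to a highest-probability lookup, as in this sketch; the label list is a hypothetical subset of the species studied in this embodiment.

```python
PEST_LABELS = ["black cutworm", "armyworm", "sorghum aphid"]  # hypothetical subset

def assign_species(probs):
    """Map each target's class probabilities p_i to the species with highest p_ij."""
    return [PEST_LABELS[max(range(len(p)), key=lambda j: p[j])] for p in probs]

print(assign_species([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]]))
# ['black cutworm', 'armyworm']
```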
Step 4: optimize the image obtained in step 3 using the novel enhanced quantum representation model.

Step 4 specifically comprises:

Step 4.1: convert the digital image into a quantum image. For an image of size $2^n \times 2^n$ with grayscale range $[0, 2^q - 1]$, the NEQR model is expressed as:

$$|I\rangle = \frac{1}{2^n}\sum_{Y=0}^{2^n-1}\sum_{X=0}^{2^n-1}|f(Y,X)\rangle \otimes |YX\rangle$$

where $|I\rangle$ is the image quantum state and

$$|f(Y,X)\rangle = |C_{YX}^{q-1}\,C_{YX}^{q-2}\cdots C_{YX}^{0}\rangle,\qquad C_{YX}^{k}\in\{0,1\}.$$

The image $|I\rangle$ consists of two parts, $|f(Y,X)\rangle$ and $|YX\rangle$, which store the grayscale information and the position information respectively. The position information $|YX\rangle$ is composed of the vertical and horizontal coordinates:

$$|YX\rangle = |Y\rangle|X\rangle = |y_{n-1}\cdots y_{0}\rangle\,|x_{n-1}\cdots x_{0}\rangle$$

where the first $n$ qubits store the vertical coordinate of the image and the last $n$ qubits its horizontal coordinate.

The NEQR model of a binary image takes the form:

$$|I\rangle = \frac{1}{2^n}\sum_{Y=0}^{2^n-1}\sum_{X=0}^{2^n-1}|C_{YX}\rangle \otimes |YX\rangle,\qquad C_{YX}\in\{0,1\}$$

where $|C_{YX}\rangle$ stores the binary information of the image and $|YX\rangle$ its position information.

Step 4.2: transform the stored state with quantum logic gates. The initial state of the quantum system is:

$$|\psi\rangle_{0} = |0\rangle^{\otimes(q+2n)}.$$

Applying a Hadamard (H) transform to each qubit that encodes position information produces a superposition over $2^{2n}$ states:

$$|\psi\rangle_{1} = \frac{1}{2^n}\,|0\rangle^{\otimes q}\otimes\sum_{j=0}^{2^{2n}-1}|j\rangle.$$
According to the pixel information of the actual image and the mapping between coordinates and grayscale values, the initial state sequence representing grayscale information in the quantum image is converted into the corresponding M state, realizing the unique mapping between position information and grayscale information:

$$|\psi\rangle_{2} = \frac{1}{2^n}\sum_{j=0}^{2^{2n}-1}|M_j\rangle \otimes |j\rangle = |I\rangle$$

where $n$ is the qubit length, $|j\rangle$ ranges over all the basis states of the quantum grayscale image system, $\otimes$ denotes the tensor product of quantum computing, $|M_j\rangle$ denotes the binary grayscale information available for encoding, and $|I\rangle$ denotes the grayscale image.

Step 4.3: perform quantum measurement on the image quantum state. The position measurement operator $P_{YX}$ is:

$$P_{YX} = I^{\otimes q}\otimes |YX\rangle\langle YX|$$

where $I^{\otimes q}$ is the direct product of $q$ identity matrices.

Projective measurement is used to recover the grayscale values from the quantum state; the measurement operator $M$ for the grayscale information is:

$$M = \sum_{Y=0}^{2^n-1}\sum_{X=0}^{2^n-1} f(Y,X)\,|YX\rangle\langle YX|$$

where $|Y\rangle$ and $|X\rangle$ both denote the stored position information of the image.
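The measurement step can be mimicked classically on a simulated state vector, as in this numpy sketch; the 2x2 image with a 2-bit gray register is a toy assumption, and projection is emulated by slicing the amplitudes compatible with each position $|j\rangle$.

```python
import numpy as np

n, q = 1, 2  # 2x2 image, 2-bit grayscale
gray = {0b00: 0b01, 0b01: 0b10, 0b10: 0b11, 0b11: 0b00}  # position j -> gray value M_j

# Build the NEQR state vector: amplitude 1/2^n on each |M_j>|j>
dim = 2 ** (q + 2 * n)
psi = np.zeros(dim)
for j, m in gray.items():
    psi[(m << (2 * n)) | j] = 1 / 2 ** n  # index of basis state |m>|j>

# Projective measurement of position j: keep amplitudes compatible with |j>
for j in range(2 ** (2 * n)):
    proj = psi.reshape(2 ** q, 2 ** (2 * n))[:, j]
    m = int(np.argmax(np.abs(proj)))      # surviving grayscale basis state
    print(f"position {j:02b} -> gray {m:0{q}b}")
```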
Step 5: extract the data features of the pest images, classify pests sharing the same data features, and establish a farmland pest classification database.

Table 1 Degree of pest damage

Most insects in farmland can be divided into beneficial insects and pests according to the benefit or harm they bring to the field. After the images are processed with the NEQR model, insects can be screened by shape, color, and texture, and the pests classified.

Beneficial insects: beneficial insects in farmland mainly include predatory insects, parasitic natural-enemy insects, insect-pathogenic microorganisms, and pollinating insects.

Pests: plants are inevitably attacked by pests during cultivation, and failure to control pests and diseases in time does great harm to agricultural production. Among the common pests that can be screened after NEQR-based image processing, the black cutworm, cabbage moth, wireworm, Asian corn borer, armyworm, wheat stem fly, sorghum aphid, and meadow moth were selected for study.

The damage that the selected common pests inflict on farmland is graded into five levels: light, light-medium, medium, medium-severe, and severe. The damage thresholds differ with the crop and the pest species. For example, as shown in Table 1, for the black cutworm in a corn field the levels light, light-medium, medium, medium-severe, and severe correspond to counts of fewer than 2, 2-6, 6-11, 11-16, and more than 16 insects per 100 corn plants.
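The corn/black-cutworm thresholds above translate directly into a grading function; how ties at the exact cut points are assigned is an assumption, since Table 1 is not reproduced in this text.

```python
def cutworm_damage_level(count_per_100_plants):
    """Grade black cutworm damage in corn by count per 100 plants (per Table 1)."""
    if count_per_100_plants < 2:
        return "light"
    if count_per_100_plants <= 6:
        return "light-medium"
    if count_per_100_plants <= 11:
        return "medium"
    if count_per_100_plants <= 16:
        return "medium-severe"
    return "severe"

print(cutworm_damage_level(4))   # light-medium
print(cutworm_damage_level(20))  # severe
```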
It follows that infestation itself is absolute while the degree of damage is relative, and different pests affect crop disease incidence to different degrees. Control is generally most effective at low-to-medium levels of prevention, for example at 150 armyworms per square metre, 1-3 wheat stem flies per 200 net sweeps, a sorghum aphid infestation rate of 30%-50% per 100 plants, 2-6 black cutworms per 100 plants, or 30-70 meadow moths per square metre.
Through web searches and field investigation, images of 10 kinds of farmland pests, 2386 samples in all, were collected and annotated. Data augmentation operations such as random flipping and random contrast adjustment were applied to the original images to increase data diversity and thereby expand the collected sample set, improving the robustness of the target detection model and its fit to real data. The resulting farmland pest dataset can be used in research fields related to farmland pest detection.
Step 6: collect images of infested farmland and perform image edge detection using pixel grayscale correlation; using the farmland pest classification database from step 5, determine the species and number of pests in each region of the field and spray the corresponding pesticide in a targeted manner.

The image edge detection by pixel grayscale correlation in step 6 specifically comprises:

Step 6.1: convert the image into a quantum image.

Let the grayscale digital image $f(x,y)$ denote the gray value at point $(x,y)$, with $0 \le x \le 2^{n}-1$ and $0 \le y \le 2^{n}-1$. If $p$ denotes the probability that the gray value at pixel $(x,y)$ is 1, and $1-p$ the probability that it is 0, the image can be represented by qubits as:

$$|I(x,y)\rangle = \sqrt{1-p}\,|0\rangle + \sqrt{p}\,|1\rangle.$$

Step 6.2: describe the grayscale transitions of neighboring pixels. Let three adjacent pixels in the image have gray values $f_1$, $f_2$, and $f_3$. By the principle of quantum state superposition they form a neighborhood grayscale correlation system:

$$|\phi\rangle = \sum_{k=0}^{7} a_k\,|k\rangle = a_0|000\rangle + a_1|001\rangle + \cdots + a_7|111\rangle.$$

In this image quantum correlation system each state vector is a three-bit binary string; a state vector with a 0-to-1 jump read from left to right is a positive-going state vector, and one with a 1-to-0 jump is a negative-going state vector.

Taking the difference between the probability of each positive- or negative-going state vector and the probability of its corresponding anti-state vector yields the local transition measure, where the $a_k$ are the probability amplitudes attached to the superposed state vectors.

Step 6.3: build the edge detection template by averaging.

The grayscale transition of the image, i.e. the edge grayscale, is described by the sum of the detection components in the horizontal and vertical directions:

$$E(x,y) = G_{H}(x,y) + G_{V}(x,y)$$

where $E(x,y)$ denotes the image transition (the edge grayscale), $G_{H}$ the detection component in the horizontal direction, and $G_{V}$ the detection component in the vertical direction.

Step 6.4: set the edge detection threshold.

Three times the mean of the edge grayscale is taken as the edge detection threshold:

$$T = 3\,\bar{E}$$

where $T$ is the edge detection threshold and $\bar{E}$ the mean edge grayscale.

A binary edge image function $e(x,y)$ is then extracted: if the edge grayscale at a pixel exceeds the threshold $T$, $e(x,y)$ is 0; otherwise it is 1.
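Setting the quantum bookkeeping aside, the thresholding rule of steps 6.3-6.4 can be sketched classically; absolute first differences stand in for the horizontal and vertical detection components (an assumption), and the output polarity follows the text, with 0 marking edge pixels.

```python
import numpy as np

def binary_edge_image(img):
    """Edge map from summed horizontal/vertical transition components.

    G_H and G_V approximate the detection components; threshold T = 3 * mean(E).
    """
    img = np.asarray(img, dtype=float)
    gh = np.abs(np.diff(img, axis=1, append=img[:, -1:]))  # horizontal transitions
    gv = np.abs(np.diff(img, axis=0, append=img[-1:, :]))  # vertical transitions
    edge = gh + gv                                         # E(x, y)
    threshold = 3 * edge.mean()                            # T = 3 * mean edge grayscale
    return np.where(edge > threshold, 0, 1)                # 0 marks an edge pixel

test = np.zeros((6, 6))
test[:, 3:] = 255                                          # vertical step edge
print(binary_edge_image(test))
```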
From the resulting threshold data, features such as the body shape, characteristics, and color of the pest images are extracted to build the farmland pest classification database; at the same time, the degree of damage each pest species causes to farmland is analyzed and graded. Pest regions are then calibrated to determine the species and number of pests in each region, and the corresponding pesticides are sprayed in a concentrated manner, reducing farmland pests more precisely and increasing crop yields.

Compared with a conventionally trained model, this embodiment raises the mean average precision over all classes by 8.1% and the recall by 12.8%. The quantum-improved deep learning target detection (YOLOv5) model strengthens the model's ability to extract and propagate pest-region information, and the optimized loss function increases the model's attention to pest samples in complex environments. The quantum-improved YOLOv5 therefore extracts the important pest features effectively in complex farmland environments and achieves good detection results: it strengthens the deep-layer representation of pest features, improves multi-scale pest detection accuracy, achieves fairly accurate detection with a small number of parameters, and localizes pests precisely, covering pest regions more tightly with higher positioning accuracy.

The above embodiments are merely exemplary embodiments of the present invention and are not intended to limit it. Those skilled in the art may make various modifications or equivalent substitutions within the spirit and scope of the present invention, and such modifications or equivalent substitutions shall also be deemed to fall within the protection scope of the present invention.
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410584757.3A CN118172676B (en) | 2024-05-13 | 2024-05-13 | A method for detecting farmland pests based on quantum deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410584757.3A CN118172676B (en) | 2024-05-13 | 2024-05-13 | A method for detecting farmland pests based on quantum deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118172676A CN118172676A (en) | 2024-06-11 |
CN118172676B true CN118172676B (en) | 2024-08-13 |
Family
ID=91359029
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410584757.3A Active CN118172676B (en) | 2024-05-13 | 2024-05-13 | A method for detecting farmland pests based on quantum deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118172676B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN118967676A * | 2024-10-16 | 2024-11-15 | Shandong Yuyang Automobile Exhaust Purification Device Co., Ltd. | A defect detection method for automobile exhaust device |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN116152537A * | 2021-11-15 | 2023-05-23 | Changchun University of Technology | 15 forestry pest identification algorithms based on Yolov5s |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN109740549B (en) * | 2019-01-08 | 2022-12-27 | Xidian University | SAR image target detection system and method based on semi-supervised CNN |
- CN117649610B (en) * | 2024-01-30 | 2024-05-28 | Jiangxi Agricultural University | A pest detection method and system based on YOLOv5 |
- 2024-05-13: application CN202410584757.3A granted as patent CN118172676B (status: active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN116152537A * | 2021-11-15 | 2023-05-23 | Changchun University of Technology | 15 forestry pest identification algorithms based on Yolov5s |
Non-Patent Citations (4)
Title |
---|
- Edge detection based on pixel grayscale correlation; Xu Wusheng, Xie Kefu; Journal of Natural Science of Hunan Normal University; August 2012; pp. 1-5 *
- Research on threshold-based quantum image segmentation algorithms; Gao Shengwei; master's thesis, Chongqing University of Posts and Telecommunications; 2023-06-15; pp. 9-16 *
- Research on a quantum image binary morphological edge detection algorithm; Zhang Xianguang; master's thesis, Jiangxi University of Science and Technology; 2023-01-15; pp. 2-12 *
- Research on the design of edge detection schemes for quantum images; Bao Hualiang; master's thesis, Northeast Petroleum University; 2024-01-15; pp. 15-17 *
Also Published As
Publication number | Publication date |
---|---|
CN118172676A (en) | 2024-06-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Li et al. | Apple leaf disease identification and classification using resnet models | |
Jia et al. | Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot | |
Chen et al. | Weed detection in sesame fields using a YOLO model with an enhanced attention mechanism and feature fusion | |
CN110148120B (en) | Intelligent disease identification method and system based on CNN and transfer learning | |
JP6935377B2 (en) | Systems and methods for automatic inference of changes in spatiotemporal images | |
CN111598001B (en) | Identification method for apple tree diseases and insect pests based on image processing | |
Kumari et al. | Hybridized approach of image segmentation in classification of fruit mango using BPNN and discriminant analyzer | |
Sabrol et al. | Fuzzy and neural network based tomato plant disease classification using natural outdoor images | |
CN107067043A (en) | A kind of diseases and pests of agronomic crop detection method | |
Zhao et al. | A detection method for tomato fruit common physiological diseases based on YOLOv2 | |
de Silva et al. | Towards agricultural autonomy: crop row detection under varying field conditions using deep learning | |
Ferrer-Ferrer et al. | Simultaneous fruit detection and size estimation using multitask deep neural networks | |
CN118172676B (en) | A method for detecting farmland pests based on quantum deep learning | |
Kumar et al. | Plant disease detection and crop recommendation using CNN and machine learning | |
CN116630960A (en) | Corn disease identification method based on texture-color multi-scale residual shrinkage network | |
Ambashtha et al. | Leaf disease detection in crops based on single-hidden layer feed-forward neural network and hierarchal temporary memory | |
CN114758132B (en) | A method and system for identifying fruit tree diseases and pests based on convolutional neural network | |
Zhang et al. | A multi-species pest recognition and counting method based on a density map in the greenhouse | |
CN111986149A (en) | A method for detecting plant diseases and insect pests based on convolutional neural network | |
Balakrishna et al. | Tomato leaf disease detection using deep learning: A CNN approach | |
Mudgil et al. | Identification of tomato plant diseases using CNN-A comparative review | |
Jin et al. | An improved mask R-CNN method for weed segmentation | |
CN117152609A (en) | A crop appearance feature detection system | |
Zhang et al. | Unsound wheat kernel recognition based on deep convolutional neural network transfer learning and feature fusion | |
Mahilraj et al. | Detection of tomato leaf diseases using attention embedded hyper-parameter learning optimization in cnn |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||