
CN110084145B - TensorFlow-based pest and disease damage time-frequency domain multi-scale identification system and operation method - Google Patents

TensorFlow-based pest and disease damage time-frequency domain multi-scale identification system and operation method

Info

Publication number
CN110084145B
CN110084145B (granted publication of application CN201910277867.4A)
Authority
CN
China
Prior art keywords
image
value
neural network
layer
crop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910277867.4A
Other languages
Chinese (zh)
Other versions
CN110084145A (en)
Inventor
刘明堂
李亚萍
袁胜
夏振伟
陈健
郑海颖
秦泽宁
吴思琪
陆桂明
孟庆云
吴勤
毕莹莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China University of Water Resources and Electric Power
Original Assignee
North China University of Water Resources and Electric Power
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China University of Water Resources and Electric Power filed Critical North China University of Water Resources and Electric Power
Priority to CN201910277867.4A priority Critical patent/CN110084145B/en
Publication of CN110084145A publication Critical patent/CN110084145A/en
Application granted granted Critical
Publication of CN110084145B publication Critical patent/CN110084145B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Catching Or Destruction (AREA)

Abstract

The invention discloses a TensorFlow-based time-frequency domain multi-scale identification system for crop pests and diseases, together with an operation method. The system comprises a hardware platform based on Internet of Things technology, including an OV7725 camera sub-node acquisition unit, a LoRa transmission sub-node transmission unit, a soil entropy sensor sub-node acquisition unit and an stm32 node processing unit, and transmits the acquired images and farmland information to a monitoring center. At the monitoring center, a TensorFlow-based statistical data analysis method analyzes and judges the crop images together with the farmland information to obtain an accurate identification and judgment of diseases and insect pests.

Description

A TensorFlow-based time-frequency domain multi-scale identification system and operation method for pests and diseases

Technical Field

The invention belongs to the field of agricultural pest and disease identification, and in particular relates to a time-frequency domain multi-scale identification system and operation method for pests and diseases based on TensorFlow (a symbolic mathematics system based on dataflow programming).

Background Art

Monitoring crop diseases and insect pests in farmland is an important step toward agricultural informatization and, further, agricultural intelligence; the overall workflow involves many steps and methods. In recent years, crop diseases and insect pests have appeared to grow more severe, with different symptoms increasingly overlapping. On the one hand, the indiscriminate use of pesticides, in both kind and quantity, has caused crop pests to gradually develop resistance; on the other hand, as prices of agricultural products keep falling, farmers eager to raise their income apply more chemical fertilizer year after year, changing the soil composition in ways that favor the growth of soil-borne crop pests. Crop diseases and pests are therefore becoming harder to control, and attributing this simply to the amount of pesticide and fertilizer used is inaccurate; a comprehensive analysis of leaf images of affected crops together with soil and air data is required. Automatic identification of diseases and insect pests is one of the key problems, and a computational method suited to identifying crop diseases and insect pests in the field must be found.

Traditional methods of monitoring crop diseases and insect pests in farmland generally include field surveys by plant-protection personnel and field sampling. These methods, however, damage the crops themselves, are time-consuming and laborious, and suffer from poor representativeness (a few points standing in for a whole field), strong subjectivity and poor timeliness; they can hardly meet the requirement of real-time monitoring of pests and diseases over large areas and cannot be widely applied. In recent years, identification of farmland crop diseases and insect pests has begun to use modern techniques such as Bayesian discriminant analysis, kernel K-means clustering and the chromaticity-moment method. Because each method has its own application conditions and limitations, none of them can adaptively adjust the image size and features used for judgment, and relying on a single parameter as the basis for interpretation in practice often fails to achieve the expected result. A method that adapts itself to different situations is therefore generally needed in practice.

With the development of science and technology, identification of farmland crop diseases and insect pests has made considerable progress, and the accuracy and speed of identification keep improving. Nevertheless, each identification method has its own prerequisites and limitations; in practical applications most current methods simply idealize the environment and contain no innovation in the method itself, so traditional approaches often fail to achieve good results in monitoring crop diseases and insect pests in the field. In applying intelligent monitoring of crop diseases and insect pests to farmland, technical personnel should therefore choose a reasonable data-processing method according to the physical properties of the on-site medium and the image characteristics.

At present, there is no special design scheme based on TensorFlow for the integrated processing of images and data in the identification of farmland crop diseases and insect pests. In farmland-monitoring projects, crop-monitoring equipment based on cameras and data-transmission modules is generally installed; it can monitor the collected farmland information, the captured crop pictures, air temperature and humidity, crop pathogen spores, the number of pests and diseases, and shifts in pest and disease information. However, once the images and data are acquired, most of them are judged only manually, which is strongly affected by subjective factors, cannot flexibly identify diseases and insect pests in different fields, and hinders the popularization of agricultural intelligence.

Summary of the Invention

Methods for identifying farmland crop diseases and insect pests include manual identification, MATLAB-based identification, machine-vision identification and other methods. Different types of identification methods are applied in different scenarios and over different effective distances, and the techniques used to perceive the outward symptoms also differ, so the image information of crop diseases and insect pests and the farmland information must be studied first. To identify diseases and insect pests of farmland crops in complex environments more accurately and promptly, it is necessary to cooperatively perceive the surrounding environment or the crop's own state, apply adaptive, time-frequency domain, multi-scale processing, extract effective information from farmland soil data and crop images, respond actively according to the corresponding rules, and carry out preliminary processing and identification of crop diseases and insect pests. Research on identification of crop diseases and insect pests in complex scenarios is therefore one of the important links in safeguarding agricultural intelligence.

In summary, the present invention takes into account the actual situation of agricultural pest and disease identification, analyzes the farmland information and crop information that indicate agricultural pests and diseases, and, in view of the characteristics of such identification, proposes a TensorFlow-based time-frequency domain multi-scale identification system and method for pests and diseases: (1) soil-borne pests and diseases cause obvious temperature and humidity differences between the affected spot and the surrounding medium, so soil entropy values must be collected for comparative analysis; (2) the leaf characteristics of crops with pests and diseases differ from those of healthy crops, so images of plant leaves in the field must be collected by camera for comparative analysis; (3) the captured crop images cannot be guaranteed to be clear and precise, so a multi-scale transformation is needed to lock onto the affected part of the image, reducing the processing load and increasing precision; (4) the differences between affected and unaffected crops can be subtle and are generally hard for the human eye to detect, so the crop images and farmland information must be judged in the time and frequency domains; (5) the processed farmland information and image information are then fused, and a TensorFlow-based method automatically changes the judgment threshold and adaptively adjusts the parameters to obtain an accurate identification and judgment of pests and diseases.

A TensorFlow-based time-frequency domain multi-scale identification system for pests and diseases comprises a hardware platform based on Internet of Things technology, including an OV7725 camera sub-node acquisition unit, a LoRa transmission sub-node transmission unit, a soil entropy sensor sub-node acquisition unit and an stm32 node processing unit; the acquired images and farmland information are transmitted to a monitoring center, where a TensorFlow-based statistical data analysis method analyzes and judges the crop images together with the farmland information to obtain an accurate identification and judgment of pests and diseases.

An operation method of the TensorFlow-based time-frequency domain multi-scale identification system for pests and diseases comprises the following steps:

First step: obtain the gradient distributions of the temperature field and the humidity field of the crop farmland;

Second step: carry out a multi-scale transformation of the farmland crop image;

Third step: carry out a time-domain transformation of the farmland crop image;

Fourth step: carry out a frequency-domain transformation of the farmland crop image;

Fifth step: data fusion, in which the farmland soil temperature and humidity obtained in the first step and the time-frequency domain image information obtained in the third and fourth steps are statistically analyzed with a two-layer propagation neural network to obtain an accurate identification and judgment of pests and diseases.

Further, the TensorFlow method described here is a mathematical-statistical analysis and processing method based on adaptive change.

Further, the gradient distributions of the temperature field and the humidity field of the crop farmland are obtained as follows:

The farmland soil temperature used for crop pest and disease identification is collected and processed as follows:

Obtain the temperature value W(t)_n of the current measuring point n at the current moment t and the temperature value W(t)_{n-1} of the previous measuring point n-1, and perform a gradient operation with the formula:

T_t = W(t)_n - W(t)_{n-1}

where W(t)_n is the temperature value of the current measuring point n at the current moment t, W(t)_{n-1} is the temperature value of the previous measuring point n-1 at the current moment t, and T_t is the temperature gradient value. After the temperature gradient value T_t is calculated, the gradient values of all measuring points are plotted to give the temperature-gradient field distribution at the corresponding points.

The farmland soil humidity used for crop pest and disease identification is collected and processed as follows:

Obtain the humidity value H(t)_n of the current measuring point n at the current moment t and the humidity value H(t)_{n-1} of the previous measuring point n-1, and perform a gradient operation with the formula:

D_t = H(t)_n - H(t)_{n-1}

where H(t)_n is the humidity value of the current measuring point n at the current moment t, H(t)_{n-1} is the humidity value of the previous measuring point n-1 at the current moment t, and D_t is the humidity gradient value. After the humidity gradient value D_t is calculated, the gradient values of all measuring points are plotted to give the humidity-gradient field distribution at the corresponding points.
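A minimal Python sketch of this gradient computation is given below; the function name gradient_fields, the use of NumPy, and the example readings are illustrative assumptions introduced here, not part of the patent.

```python
import numpy as np

def gradient_fields(temps, hums):
    """Per-point temperature (T_t) and humidity (D_t) gradient values.

    temps, hums: 1-D sequences of W(t)_n and H(t)_n read along the measuring
    points at the current moment t; the first point has no previous neighbour,
    so its gradient is reported as 0.
    """
    temps = np.asarray(temps, dtype=float)
    hums = np.asarray(hums, dtype=float)
    t_grad = np.diff(temps, prepend=temps[0])   # T_t = W(t)_n - W(t)_{n-1}
    d_grad = np.diff(hums, prepend=hums[0])     # D_t = H(t)_n - H(t)_{n-1}
    return t_grad, d_grad

# Example: five measuring points along one furrow (hypothetical readings)
t_grad, d_grad = gradient_fields([21.0, 21.2, 23.5, 21.1, 21.0],
                                 [0.31, 0.30, 0.42, 0.32, 0.31])
print(t_grad)   # the spike at the third point hints at a localized anomaly
```

Plotting these per-point values over the field gives the temperature-gradient and humidity-gradient field distributions described above.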

Further, the multi-scale transformation of the farmland crop image is carried out as follows:

The image segmentation for crop pest and disease identification proceeds as follows:

Obtain the image R occupying the entire spatial region and divide it into four regions R_1, R_2, R_3 and R_4; the process can be expressed as:

R_1 + R_2 + R_3 + R_4 = R

where R is the entire spatial region occupied by the image, R_1 is the upper-left block, R_2 the upper-right block, R_3 the lower-left block and R_4 the lower-right block of the segmentation. The image is segmented twice in this way, and each segmented layer is then enlarged, which achieves the purpose of the multi-scale transformation.

The enlargement transformation of the crop pest and disease identification image proceeds as follows:

Obtain the one-dimensional abscissa function B(x) of the original image, the function B(2x) obtained by compressing the one-dimensional abscissa function of the original image by a factor of two, and the function B(2x-1) obtained by shifting B(2x) one unit to the right; the calculation formula is:

A(x) = 2[B(2x) + B(2x-1)]

where A(x) is the two-fold enlargement function along the one-dimensional abscissa of the image, B(x) is the one-dimensional abscissa function of the original image, B(2x) is the function obtained by compressing the one-dimensional abscissa function of the original image by a factor of two, and B(2x-1) is the function obtained by shifting B(2x) one unit to the right. The one-dimensional ordinate function is processed in the same way, so the image is finally enlarged to four times its original size.

The multi-scale transformation of the present invention performs image segmentation and image enlargement twice each; preferably the order is segmentation, enlargement, segmentation again, enlargement again, so that the final image is enlarged four-fold in length and width and sixteen-fold in area. This completes the image multi-scale transformation of the invention.
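The split-and-enlarge procedure can be illustrated with a short Python sketch; the helper names split_quadrants and zoom_on_quadrant, the OpenCV-based resizing, and the choice of the upper-right quadrant (as in Figure 2) are assumptions introduced here for illustration only.

```python
import cv2

def split_quadrants(img):
    """Split image R into R1 (upper-left), R2 (upper-right), R3 (lower-left), R4 (lower-right)."""
    h, w = img.shape[:2]
    return (img[:h // 2, :w // 2], img[:h // 2, w // 2:],
            img[h // 2:, :w // 2], img[h // 2:, w // 2:])

def zoom_on_quadrant(img, pick=1, rounds=2):
    """Repeat 'segment, then enlarge' for the requested number of rounds.

    Each round keeps one quadrant (pick: 0..3, default upper-right as in Figure 2)
    and resizes it back to the original size, i.e. a 2x enlargement in width and
    height (4x in area); two rounds give the 4x / 16x enlargement described above.
    """
    h, w = img.shape[:2]
    for _ in range(rounds):
        quad = split_quadrants(img)[pick]
        img = cv2.resize(quad, (w, h), interpolation=cv2.INTER_LINEAR)
    return img

# Example usage on a hypothetical leaf photo:
# leaf = cv2.imread("leaf.jpg")
# roi = zoom_on_quadrant(leaf, pick=1, rounds=2)   # upper-right region, 16x in area
```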

Further, the time-domain transformation of the farmland crop image proceeds as follows:

Obtain the coordinates (r, g, b) representing the red, green and blue components of the crop pest and disease image, and let max be the largest and min the smallest of r, g and b; the calculation formulas are as follows:

H = 0, if max = min;
H = 60*(g-b)/(max-min), if max = r and g >= b;
H = 60*(g-b)/(max-min) + 360, if max = r and g < b;
H = 60*(b-r)/(max-min) + 120, if max = g;
H = 60*(r-g)/(max-min) + 240, if max = b

S=((max-min)/max)*100/255S=((max-min)/max)*100/255

V=max*100/255V=max*100/255

H is the hue on the red, green and blue coordinates (r, g, b) of the image, S is the saturation, and V is the brightness. After H, S and V are obtained, a modulo (remainder) operation is applied to the row and column pixels of the quantized image and the result is matched against 72 colour regions; finally, the colour features of the image are extracted and matched against a colour library to determine whether pests or diseases are present.
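A hedged Python sketch of this time-domain (HSV) step is shown below; it assumes the standard RGB-to-HSV conversion for the hue and an 8 x 3 x 3 partition for the 72 colour regions, neither of which is spelled out in the patent text, and the function names are hypothetical.

```python
import colorsys

def hsv_features(r, g, b):
    """Standard RGB->HSV for one 8-bit pixel; S and V are scaled as in the formulas above."""
    mx, mn = max(r, g, b), min(r, g, b)
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    H = h * 360.0                                              # hue in degrees
    S = 0.0 if mx == 0 else (mx - mn) / mx * 100.0 / 255.0     # in [0, 100/255]
    V = mx * 100.0 / 255.0                                     # in [0, 100]
    return H, S, V

def colour_region(H, S, V):
    """Assumed 72-region index: 8 hue bins x 3 saturation bins x 3 value bins."""
    h_bin = min(int(H // 45.0), 7)                  # 8 hue sectors of 45 degrees each
    s_bin = min(int(S / (100.0 / 255.0) * 3.0), 2)
    v_bin = min(int(V / 100.0 * 3.0), 2)
    return h_bin * 9 + s_bin * 3 + v_bin            # region index in 0..71

# Example: a brownish pixel typical of a leaf lesion
print(colour_region(*hsv_features(180, 120, 40)))
```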

Further, the frequency-domain transformation of the farmland crop image proceeds as follows:

Obtain the direction μ and scale ν of the Gabor filter, the image pixel coordinates (x, y), denoted z, the Gaussian envelope σ in the frequency domain, and the function k_{μ,ν} that controls the width of the Gaussian window, the wavelength of the oscillating part and the direction. The calculation formula is as follows:

Ψ_{μ,ν}(z) = (||k_{μ,ν}||^2 / σ^2) * exp(-||k_{μ,ν}||^2 * ||z||^2 / (2σ^2)) * [exp(i*k_{μ,ν}*z) - exp(-σ^2/2)]

Ψ_{μ,ν}(z) is the Gabor kernel amplitude characteristic; μ and ν represent the direction and scale of the Gabor filter, respectively; z represents the image pixel coordinates (x, y); ||·|| denotes the norm; σ is the Gaussian envelope; k_{μ,ν} controls the width of the Gaussian window, the wavelength of the oscillating part and the direction, and is defined as

k_{μ,ν} = k_ν * exp(i*φ_μ), with φ_μ = πμ/8

k_ν = k_max/f^ν is the filter sampling frequency, where k_max is the maximum frequency and f^ν is the spacing factor limiting the distance between kernel functions in the frequency domain;

After the Gabor kernel amplitude characteristic Ψ_{μ,ν}(z) is obtained, it is convolved with the original image to obtain distinct image features, from which the pest and disease judgment is made.
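The following Python sketch illustrates the frequency-domain step with OpenCV's real-valued Gabor kernels as a practical stand-in for the complex kernel above; all parameter values and function names are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def gabor_features(gray, scales=(31,), orientations=8):
    """Convolve a grayscale crop image with a small bank of Gabor kernels.

    Each (scale, orientation) pair yields one filtered image; strong responses
    highlight the texture and edge structure used for the pest/disease judgment.
    """
    responses = []
    for ksize in scales:
        for m in range(orientations):
            theta = np.pi * m / orientations   # orientation, e.g. phi_mu = pi*mu/8 for 8 directions
            kernel = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                        lambd=10.0, gamma=0.5, psi=0.0)
            responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    return responses

# Example usage on a hypothetical leaf image:
# gray = cv2.imread("leaf.jpg", cv2.IMREAD_GRAYSCALE)
# features = gabor_features(gray)
```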

Further, in the fifth step, after the farmland temperature field, the humidity field, the time-domain processing value and the frequency-domain processing value have been obtained through the above steps, a two-layer propagation neural network is used for statistical data analysis; the input vector X_i is normalized with the following formula:

X_i* = X_i / sqrt( Σ_i X_i^2 )

X_i is the input vector of each neuron, namely the soil humidity field, the temperature field, the time-domain calculated value and the frequency-domain calculated value of this invention; X_i* is the normalized input vector X_i.

Not all input neuron nodes can transmit data into the first-layer neural network; only the optimal value can, which is calculated as follows:

S_j(n) = max_j ( Σ_i W_ij*(n) * X_i* )

X_i* is the normalized input vector X_i from the input neurons to the first-layer neural network at the current measuring point n. W_ij*(n) is the normalized connection weight from the input neurons to the first-layer neural network at the current measuring point n, normalized in the same way as X_i. S_j(n) is the maximum of the sum of products of X_i* and W_ij*(n), i.e. the first-layer optimal value.

To obtain the first-layer optimal value, the weights must be adjusted several times; the correction formula for the connection weights from the input neurons to the first-layer neural network at the current measuring point n is:

W_ij*(n+1) = W_ij*(n) + α * (X_i* - W_ij*(n))

W_ij*(n) are the normalized connection weights of the current measuring point n from the input neurons to the four input values of the first-layer neural network, with initial values 0.1, 0.2, 0.3 and 0.4 respectively. α is the learning rate, a value between 0 and 1 that decreases continuously as training proceeds, initially 0.3. X_i* is the normalized input vector X_i from the input neurons to the first-layer neural network at the current measuring point n. W_ij*(n+1) is the corrected connection weight of the current measuring point n from the input neurons to the first-layer neural network. The weights are corrected repeatedly until they no longer change, i.e. until W_ij*(n+1) equals W_ij*(n), at which point X_i* - W_ij*(n) equals zero.

Similarly, not all first-layer neuron nodes can transmit data into the second-layer neural network; only the optimal value can, which is calculated as follows:

L_j(n) = max_k ( Σ_j V_jk(n) * S_j(n) )

V_jk is the connection weight of the current measuring point n from the first-layer neuron nodes to the second-layer neural network. S_j(n) is the maximum of the sum of products of X_i* and W_ij*(n), i.e. the first-layer optimal value. L_j(n) is the maximum of the sum of products of V_jk and S_j(n), i.e. the second-layer optimal value;

To obtain the second-layer optimal value, the weights must be adjusted several times; the correction formula for the weights between the first-layer and second-layer neural networks is:

V_jk(n+1) = V_jk(n) + β * b_j * (y_k - D_k)

V_jk(n) is the connection weight of the current measuring point n from the first-layer neural network to the second-layer neural network, both weights initially set to 0.5. β is the learning rate, a value between 0 and 1 that decreases continuously as training proceeds, initially 0.4. b_j is the binary output vector of the competitive layer: when L_j(n) is attained, b_j = 1, otherwise b_j = 0. D_k is the expected output value of the second-layer neural network. V_jk(n+1) is the corrected connection weight of the current measuring point n from the first-layer neural network to the second-layer neural network. y_k is the actual output value of the second-layer neural network, calculated as follows:

y_k = Σ_j b_j * V_kj * S_j(n)

V_kj is the connection weight from the first-layer neural network to the second-layer neural network, and V_kj* is the product of V_kj and S_j(n). When L_j(n) is attained, i.e. when the sum of products of the first-layer optimal value and the second-layer connection weights is maximal, b_j = 1 and V_kj* equals L_j(n); otherwise b_j = 0 and y_k = 0.
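As an illustrative sketch only, the two-layer propagation fusion described above (a competitive first layer followed by a corrected output layer) might look as follows in NumPy. The unit-length normalization, the constant learning rates, and the use of the conventional convergent form β*b_j*(D_k - y_k) for the output-layer correction (the text writes the difference as y_k - D_k) are assumptions of this sketch, not statements of the patented method.

```python
import numpy as np

def normalize(v):
    """Assumed normalization: scale a vector to unit length."""
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def train_step(x, W, V, D, alpha=0.3, beta=0.4):
    """One training step of the two-layer propagation network at measuring point n.

    x : the 4 inputs (temperature gradient, humidity gradient, time-domain value,
        frequency-domain value); W : input -> first-layer weights, shape (J, 4);
    V : first-layer -> second-layer weights, shape (K, J); D : expected outputs,
    shape (K,), e.g. [1, 0] for "pest/disease present".
    Learning rates are held constant here; the text decreases them during training.
    """
    x_star = normalize(x)
    W_star = np.apply_along_axis(normalize, 1, W)
    scores = W_star @ x_star                      # candidate S_j(n) values
    j = int(np.argmax(scores))                    # only the optimal node fires
    b = np.zeros(W.shape[0])
    b[j] = 1.0                                    # binary output of the competitive layer
    W[j] += alpha * (x_star - W[j])               # pull the winner's weights toward X_i*
    y = V @ (b * scores)                          # y_k, driven by the winning node only
    # Output-layer correction; the convergent form beta*b_j*(D_k - y_k) is used here
    # (the patent text writes the difference as y_k - D_k).
    V += beta * np.outer(D - y, b)
    return W, V, y

# Illustrative run with 2 first-layer nodes and 2 outputs (pest / no pest)
rng = np.random.default_rng(0)
W = np.array([[0.1, 0.2, 0.3, 0.4],
              [0.4, 0.3, 0.2, 0.1]])
V = np.full((2, 2), 0.5)
for _ in range(50):
    x = np.array([2.4, 0.12, 0.8, 0.6]) + rng.normal(0.0, 0.05, 4)  # hypothetical readings
    W, V, y = train_step(x, W, V, D=np.array([1.0, 0.0]))
print(np.argmax(y) == 0)   # True once this pattern maps to "pest/disease present"
```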

Beneficial Effects:

The invention overcomes the difficulty of identifying farmland crop diseases and insect pests in existing complex scenarios and achieves scientific identification of farmland crop diseases and insect pests.

Brief Description of the Drawings

Figure 1 is a flowchart of the TensorFlow-based time-frequency domain multi-scale pest and disease identification system and method of the present invention;

Figure 2 is a schematic diagram of the multi-scale segmentation and enlargement of a farmland pest and disease image according to the present invention;

Figure 3 is a schematic diagram of the time-domain HSV model of a farmland pest and disease image according to the present invention;

Figure 4 is a schematic diagram of the two-layer propagation neural network.

Detailed Description of the Embodiments

The present invention is described in further detail below with reference to specific embodiments:

Figure 1 is a flowchart of the TensorFlow-based (a symbolic mathematics system based on dataflow programming) time-frequency domain multi-scale pest and disease identification system and method of the present invention. The system first initializes and measures the soil temperature and humidity and the crop image information at the current measuring point n; it then successively performs the gradient solution of the farmland soil temperature field, the gradient solution of the farmland soil humidity field, the first segmentation of the crop image, the first enlargement of the crop image, the second segmentation of the crop image, the second enlargement of the crop image, the time-domain transformation of the crop image and the frequency-domain transformation of the crop image. Finally, the temperature and humidity gradient fields and the time-frequency domain information of the crop image obtained in the above steps are statistically analyzed with the two-layer propagation neural network to determine whether pests or diseases have occurred at the current measuring point n.

Figure 2 is a schematic diagram of the multi-scale segmentation and enlargement of a farmland pest and disease image according to the present invention. Reference 1 is the first layer, i.e. the original captured image or an image that has undergone simple physical processing. The first layer is divided into four parts; the upper-right block is taken and enlarged to form the second layer (reference 2), so that the upper-right part of the first layer is enlarged four-fold in area. The second layer is then again divided into four parts and its upper-right block is enlarged to form the third layer (reference 3); at this point the upper-right part of the second layer is enlarged four-fold in area, and the upper-right part of the first layer sixteen-fold. In this way any part of the first layer can be enlarged, which realizes the multi-scale transformation.

Figure 3 is a schematic diagram of the time-domain HSV model of a farmland pest and disease image according to the present invention. Reference 1 is the angle of the red hue; 2 the angle of the yellow hue; 3 the angle of the green hue; 4 the angle of the cyan hue; 5 the angle of the blue hue; 6 the angle of the magenta hue; 7 the direction of rotation of the hue (H); 8 the direction of extension of the saturation (S); 9 the direction of extension of the brightness (V). Crop images can be classified according to the combination of 7 (hue), 8 (saturation) and 9 (brightness).

Figure 4 is a schematic diagram of the two-layer propagation neural network of the present invention. Reference 1 is the temperature gradient value of the current measuring point n; 2 the humidity gradient value of the current measuring point n; 3 the time-domain processing value of the crop image at the current measuring point n; 4 the frequency-domain processing value of the crop image at the current measuring point n; 5 and 6 are the two possible judgment results (yes and no); 7 is an input neuron node, 8 a first-layer network node and 9 a second-layer network node. The measuring-point data from nodes 1, 2, 3 and 4 are selected and the weights adjusted, and the optimal value enters the first-layer network; after a further selection and weight adjustment the optimal value enters the second-layer network, and finally whether pests or diseases have occurred at the measuring point is determined on TensorFlow.
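Tying the pieces of Figure 1 and Figure 4 together, a hypothetical end-to-end pass for one measuring point could look like the sketch below; it reuses the illustrative helpers defined earlier (gradient_fields, zoom_on_quadrant, hsv_features, colour_region, gabor_features), none of which are names from the patent, and the feature scaling is an assumption of this example.

```python
import cv2
import numpy as np

def classify_point(temps, hums, leaf_bgr, W, V):
    """Hypothetical end-to-end judgment for one measuring point n."""
    t_grad, d_grad = gradient_fields(temps, hums)           # inputs 1 and 2: soil gradients
    roi = zoom_on_quadrant(leaf_bgr, pick=1, rounds=2)       # multi-scale zoom on the suspect area
    b, g, r = (int(c) for c in roi.reshape(-1, 3).mean(axis=0))
    time_val = colour_region(*hsv_features(r, g, b)) / 71.0  # input 3: time-domain (HSV) feature
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    freq_val = float(np.mean(np.abs(gabor_features(gray)[0]))) / 255.0  # input 4: frequency-domain feature
    x = np.array([t_grad[-1], d_grad[-1], time_val, freq_val])
    x_star = x / (np.linalg.norm(x) + 1e-12)
    scores = (W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)) @ x_star
    j = int(np.argmax(scores))                               # winning first-layer node
    y = V[:, j] * scores[j]                                  # second-layer output (yes / no)
    return "pest/disease" if int(np.argmax(y)) == 0 else "healthy"
```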

Although embodiments of the present invention have been shown and described, those skilled in the art will understand that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and spirit of the invention; the scope of the invention is defined by the appended claims and their equivalents.

Claims (1)

1. An operation method of the TensorFlow-based time-frequency domain multi-scale pest and disease identification system, characterized in that the system comprises a hardware platform based on Internet of Things technology, including an OV7725 camera sub-node acquisition unit, a LoRa transmission sub-node transmission unit, a soil entropy sensor sub-node acquisition unit and an stm32 node processing unit; the acquired images and farmland information are transmitted to a monitoring center; a TensorFlow-based statistical data analysis method is carried out at the monitoring center, the crop images and farmland information are analyzed and judged, and an accurate pest and disease identification judgment is obtained;
the operation method comprises the following steps:
step one, obtaining gradient distribution of a temperature field and a humidity field of a crop farmland;
secondly, carrying out multi-scale transformation on farmland crop images;
thirdly, performing time domain transformation on farmland crop images;
fourthly, carrying out frequency-domain transformation on the farmland crop images;
fifthly, data fusion: carrying out statistical data analysis on the farmland soil temperature and humidity and the crop image time-frequency domain information obtained in the first, third and fourth steps by using a two-layer propagation neural network method, so as to obtain an accurate plant disease and insect pest identification judgment;
the TensorFlow method is a mathematical statistical analysis processing method based on self-adaptive change;
the gradient distribution of the temperature field and the humidity field of the crop farmland is obtained, and the specific operation is as follows:
the temperature acquisition and treatment process of the crop pest and disease damage identification farmland soil is as follows:
acquiring the temperature value W(t)_n of the current measuring point n at the current moment t and the temperature value W(t)_{n-1} of the previous measuring point n-1, and carrying out a gradient operation with the formula:
T_t = W(t)_n - W(t)_{n-1}
wherein W(t)_n is the temperature value of the current measuring point n at the current moment t, W(t)_{n-1} is the temperature value of the previous measuring point n-1 at the current moment t, and T_t is the temperature gradient value; after the temperature gradient value T_t is calculated, the gradient value of each measuring point is plotted and the temperature-gradient field distribution of the corresponding points is obtained;
the crop pest and disease damage identification farmland soil humidity acquisition and treatment process comprises the following steps:
acquiring the humidity value H(t)_n of the current measuring point n at the current moment t and the humidity value H(t)_{n-1} of the previous measuring point n-1, and carrying out a gradient operation with the formula:
D_t = H(t)_n - H(t)_{n-1}
wherein H(t)_n is the humidity value of the current measuring point n at the current moment t, H(t)_{n-1} is the humidity value of the previous measuring point n-1 at the current moment t, and D_t is the humidity gradient value; after the humidity gradient value D_t is calculated, the gradient value of each measuring point is plotted and the humidity-gradient field distribution of the corresponding points is obtained;
the farmland crop image multi-scale transformation is operated as follows:
the crop pest identification image segmentation processing process comprises the following steps:
acquiring an image R occupying the whole spatial region and dividing the image into four regions R_1, R_2, R_3 and R_4, the process being expressed as:
R_1 + R_2 + R_3 + R_4 = R
wherein R is the whole spatial region occupied by the image, R_1 is the upper-left image block, R_2 the upper-right image block, R_3 the lower-left image block and R_4 the lower-right image block of the segmentation; the image is segmented twice, and the segmented image layer is enlarged;
the crop pest identification image amplification and transformation processing process comprises the following steps:
obtaining the one-dimensional abscissa function B(x) of the original image, the function B(2x) obtained by compressing the one-dimensional abscissa function of the original image by a factor of two, and the function B(2x-1) obtained by shifting B(2x) one unit to the right, with the calculation formula:
A(x) = 2[B(2x) + B(2x-1)]
wherein A(x) is the two-fold enlargement function on the one-dimensional abscissa of the image, B(x) is the one-dimensional abscissa function of the original image, B(2x) is the function obtained by compressing the one-dimensional abscissa function of the original image by a factor of two, and B(2x-1) is the function obtained by shifting B(2x) one unit to the right; the one-dimensional ordinate function is treated in the same way, so that the image is finally enlarged to four times the original image;
the multi-scale transformation comprises two image segmentation and two image enlargement processes, namely image segmentation, image enlargement, image segmentation again and image enlargement again; the finally obtained image is four times longer and wider than the original image and sixteen times larger in area, whereby the multi-scale transformation of the image is completed;
the farmland crop image time domain transformation specifically comprises the following steps:
coordinates (r, g, b) representing red, green and blue of the crop pest images are obtained, and the maximum one of r, g and b is max and the minimum one of r, g and b is min, and the calculation formula is as follows:
H = 0, if max = min;
H = 60*(g-b)/(max-min), if max = r and g >= b;
H = 60*(g-b)/(max-min) + 360, if max = r and g < b;
H = 60*(b-r)/(max-min) + 120, if max = g;
H = 60*(r-g)/(max-min) + 240, if max = b
S=((max-min)/max)*100/255
V=max*100/255
H is the hue on the red, green and blue coordinates (r, g, b) of the image, S is the saturation on the red, green and blue coordinates (r, g, b) of the image, and V is the brightness on the red, green and blue coordinates (r, g, b) of the image; after H, S and V are obtained, a modulo (remainder) operation is applied to the row pixels and column pixels of the quantized image and the result is matched with 72 colour areas; finally, the colour characteristics of the image are extracted and matched with a colour library to determine whether diseases and insect pests exist;
the farmland crop image frequency domain transformation comprises the following specific operation processes:
the direction μ and scale ν of the Gabor filter are obtained, the image pixel coordinates (x, y) are represented by z, σ is the Gaussian envelope in the frequency domain, and k_{μ,ν} is the function controlling the width of the Gaussian window, the wavelength of the oscillating part and the direction; the calculation formula is as follows:
Ψ_{μ,ν}(z) = (||k_{μ,ν}||^2 / σ^2) * exp(-||k_{μ,ν}||^2 * ||z||^2 / (2σ^2)) * [exp(i*k_{μ,ν}*z) - exp(-σ^2/2)]
Ψ_{μ,ν}(z) is the Gabor kernel amplitude characteristic; μ and ν represent the direction and scale of the Gabor filter, respectively; z represents the image pixel coordinates (x, y); ||·|| represents the norm; σ is the Gaussian envelope; k_{μ,ν} controls the width of the Gaussian window, the wavelength of the oscillating part and the direction, and is defined as
k_{μ,ν} = k_ν * exp(i*φ_μ), with φ_μ = πμ/8
k_ν = k_max/f^ν is the filter sampling frequency, k_max is the maximum frequency, and f^ν is the spacing factor limiting the kernel-function distance in the frequency domain,
after the Gabor kernel amplitude characteristic Ψ_{μ,ν}(z) is obtained, it is convolved with the original image to obtain distinct image features, so as to judge the plant diseases and insect pests,
further, in the fifth step, after the farmland temperature field, the humidity field, the time-domain processing value and the frequency-domain processing value are respectively obtained through the above steps, a two-layer propagation neural network is used for statistical data analysis, and the input vector X_i is normalized with the following formula:
X_i* = X_i / sqrt( Σ_i X_i^2 )
X_i is the input vector of each neuron, namely the soil humidity field, the temperature field, the time-domain calculated value and the frequency-domain calculated value of the invention, and X_i* is the normalized input vector X_i;
not all input neuron nodes can transmit data into the first-layer neural network; only the optimal value can, which is calculated as follows:
S_j(n) = max_j ( Σ_i W_ij*(n) * X_i* )
X_i* is the normalized input vector X_i from the input neurons to the first-layer neural network for the current measuring point n; W_ij*(n) is the normalized connection weight from the input neurons to the first-layer neural network input vector X_i for the current measuring point n, normalized in the same way as X_i; S_j(n) is the maximum value of the sum of products of X_i* and W_ij*(n), i.e. the first-layer optimum;
in order to obtain the first-layer optimal value, the weighting value needs to be adjusted for a plurality of times, and the connection weighting value correction calculation formula of the current measuring point n from the input neuron to the first-layer neural network is as follows:
W_ij*(n+1) = W_ij*(n) + α * (X_i* - W_ij*(n))
wherein W_ij*(n) are the normalized connection weights of the current measuring point n from the input neurons to the 4 input values of the first-layer neural network, with initial values 0.1, 0.2, 0.3 and 0.4 respectively; α is the learning rate, a value between 0 and 1 that decreases continuously with training, initially 0.3; X_i* is the normalized input vector X_i from the input neurons to the first-layer neural network for the current measuring point n; W_ij*(n+1) is the corrected connection weight of the current measuring point n from the input neurons to the first-layer neural network; the weight is corrected repeatedly until it no longer changes, i.e. W_ij*(n+1) equals W_ij*(n), at which time X_i* - W_ij*(n) equals zero;
similarly, not all first-layer neuron nodes can transmit data into the second-layer neural network; only the optimal value can, which is calculated as follows:
L_j(n) = max_k ( Σ_j V_jk(n) * S_j(n) )
V_jk is the connection weight of the current measuring point n from the first-layer neuron nodes to the second-layer neural network; S_j(n) is the maximum value of the sum of products of X_i* and W_ij*(n), i.e. the first-layer optimum; L_j(n) is the maximum value of the sum of products of V_jk and S_j(n), i.e. the second-layer optimum;
in order to obtain the second-layer optimal value, the weighted value needs to be adjusted for a plurality of times, and the weighted value correction calculation formula between the first-layer neural network and the second-layer neural network is as follows:
V_jk(n+1) = V_jk(n) + β * b_j * (y_k - D_k)
V_jk(n) is the connection weight of the current measuring point n from the first-layer neural network to the second-layer neural network, with initial values both 0.5; β is the learning rate, a value between 0 and 1 that decreases continuously with training, initially 0.4; b_j is the binary output vector of the competitive layer: when L_j(n) is attained, b_j = 1, otherwise b_j = 0; D_k is the expected output value of the second-layer neural network; V_jk(n+1) is the corrected connection weight of the current measuring point n from the first-layer neural network to the second-layer neural network; y_k is the actual output value of the second-layer neural network, calculated as follows:
y_k = Σ_j b_j * V_kj * S_j(n)
V_kj is the connection weight from the first-layer neural network to the second-layer neural network, and V_kj* is the product of the connection weight V_kj and S_j(n); when L_j(n) is attained, i.e. when the sum of products of the first-layer optimal value and the second-layer connection weights is maximal, b_j = 1 and V_kj* is L_j(n); otherwise b_j = 0 and y_k = 0.
CN201910277867.4A 2019-04-08 2019-04-08 TensorFlow-based pest and disease damage time-frequency domain multi-scale identification system and operation method Active CN110084145B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910277867.4A CN110084145B (en) 2019-04-08 2019-04-08 TensorFlow-based pest and disease damage time-frequency domain multi-scale identification system and operation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910277867.4A CN110084145B (en) 2019-04-08 2019-04-08 TensorFlow-based pest and disease damage time-frequency domain multi-scale identification system and operation method

Publications (2)

Publication Number Publication Date
CN110084145A CN110084145A (en) 2019-08-02
CN110084145B true CN110084145B (en) 2023-05-12

Family

ID=67414498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910277867.4A Active CN110084145B (en) 2019-04-08 2019-04-08 TensorFlow-based pest and disease damage time-frequency domain multi-scale identification system and operation method

Country Status (1)

Country Link
CN (1) CN110084145B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111107530A (en) * 2019-12-06 2020-05-05 深圳大学 Agricultural pest control system based on LoRa technology

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102084794A (en) * 2010-10-22 2011-06-08 华南农业大学 Method and device for early detecting crop pests based on multisensor information fusion
CN102523953A (en) * 2011-12-07 2012-07-04 北京农业信息技术研究中心 Crop information fusion method and disease monitoring system
CN102945376A (en) * 2012-09-28 2013-02-27 北京农业信息技术研究中心 Method for diagnosing crops diseases
CN103489006A (en) * 2013-10-11 2014-01-01 河南城建学院 Computer vision-based rice disease, pest and weed diagnostic method
CN105825177A (en) * 2016-03-09 2016-08-03 西安科技大学 Remote-sensing crop disease identification method based on time phase and spectrum information and habitat condition
CN106202489A (en) * 2016-07-20 2016-12-07 青岛云智环境数据管理有限公司 A kind of agricultural pest intelligent diagnosis system based on big data
CN107862687A (en) * 2017-11-07 2018-03-30 潘柏霖 A kind of early warning system for being used to monitor agricultural pest
CN108009936A (en) * 2017-10-31 2018-05-08 四川农业大学 Pest and disease monitoring system based on internet of things
CN108960310A (en) * 2018-06-25 2018-12-07 北京普惠三农科技有限公司 A kind of agricultural pest recognition methods based on artificial intelligence
CN109470299A (en) * 2018-10-19 2019-03-15 江苏大学 A system and method for monitoring crop growth information based on the Internet of Things

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9898688B2 (en) * 2016-06-01 2018-02-20 Intel Corporation Vision enhanced drones for precision farming

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hyeon Park et al., "Crops Disease Diagnosing Using Image-Based Deep Learning Mechanism", 2018 International Conference on Computing and Network Communications, 2018-12-31, pp. 23-26. *
Song Kai, "Research on Crop Disease Identification Methods Based on Computer Vision" (in Chinese), China Doctoral Dissertations Full-text Database, Information Science and Technology, No. 1, 2010-01-15, p. I138-29. *

Also Published As

Publication number Publication date
CN110084145A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
Chandel et al. Identifying crop water stress using deep learning models
Sannakki et al. Diagnosis and classification of grape leaf diseases using neural networks
Kirola et al. Plants diseases prediction framework: a image-based system using deep learning
Tiwari et al. An experimental set up for utilizing convolutional neural network in automated weed detection
CN117496356A (en) Agricultural artificial intelligent crop detection method and system
Suresh et al. Performance analysis of different CNN architecture with different optimisers for plant disease classification
Grace et al. Crop and weed classification using deep learning
Yakkundimath et al. Automatic methods for classification of visual based viral and bacterial disease symptoms in plants
Kumar et al. Automatic leaf disease detection and classification using hybrid features and supervised classifier
Jasim High-performance deep learning to detection and tracking tomato plant leaf predict disease and expert systems
Kumar et al. Combining Weather Classification and Mask RCNN for Accurate Wheat Rust Disease Prediction
Kanupuru et al. A deep learning approach to detect the spoiled fruits
CN110084145B (en) TensorFlow-based pest and disease damage time-frequency domain multi-scale identification system and operation method
Yano et al. Choosing classifier for weed identification in sugarcane fields through images taken by UAV.
Kodali et al. Tomato plant leaf disease detection using CNN
Ke et al. Intelligent vineyard blade density measurement method incorporating a lightweight vision transformer
Rony et al. BottleNet18: Deep Learning-Based Bottle Gourd Leaf Disease Classification
Nyarko et al. Tomato fruit disease detection based on improved single shot detection algorithm
Santhosh Kumar et al. Review on disease detection of plants using image processing and machine learning techniques
Bhosale et al. Tomato Plant Disease Identification via Deep Learning Technique
Deisy et al. Image segmentation for feature extraction: A study on disease diagnosis in agricultural plants
Entuni et al. Identification of corn leaf diseases comprising of blight, grey spot, and rust using DenseNet-201
Arinichev et al. Digital solutions in the system of intelligent crop monitoring
Jayapriya et al. Detection of maize stem and leaf diseases using edge detection method to prevent the crops from diseases
Dixit et al. Approaches to identify paddy plant diseases using deep learning: A review

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant