
CN109102004A - Cotton-plant pest-insects method for identifying and classifying and device - Google Patents


Info

Publication number
CN109102004A
CN109102004A (application CN201810812487.1A)
Authority
CN
China
Prior art keywords
cotton
image
wing
pixel
classified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810812487.1A
Other languages
Chinese (zh)
Inventor
曲海平
张颖
岳峻
李振波
寇光杰
张志旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ludong University
Original Assignee
Ludong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ludong University
Priority: CN201810812487.1A
Publication: CN109102004A
Legal status: Pending


Classifications

    • G06F18/23 Pattern recognition > Analysing > Clustering techniques
    • G06T5/30 Image enhancement or restoration > using local operators > Erosion or dilatation, e.g. thinning
    • G06T5/70 Image enhancement or restoration > Denoising; Smoothing
    • G06T7/13 Image analysis > Segmentation; Edge detection > Edge detection
    • G06T7/136 Image analysis > Segmentation; Edge detection > involving thresholding
    • G06V10/44 Image or video recognition or understanding > Extraction of image or video features > Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06T2207/20032 Indexing scheme for image analysis or image enhancement > Special algorithmic details > Filtering details > Median filtering
    • G06T2207/20084 Indexing scheme for image analysis or image enhancement > Special algorithmic details > Artificial neural networks [ANN]
    • G06T2207/30188 Indexing scheme for image analysis or image enhancement > Subject of image; Context of image processing > Earth observation > Vegetation; Agriculture


Abstract

Embodiments of the invention provide a cotton pest identification and classification method and device. The method acquires an original image containing the cotton pest to be classified, and obtains the Hu invariant moment parameters of the original image and the overall contour characteristic parameters of the pest; separates the pest image from the background of the original image based on the Otsu threshold segmentation algorithm and the Canny edge detection algorithm, extracts a wing image from the original image, and obtains the wing contour characteristic parameters of the pest from the wing image; optimizes the wing image with a mathematical morphology algorithm and extracts the mathematical morphology characteristic parameters corresponding to the wings; and finally identifies and classifies the cotton pest with a radial basis function neural network. The method and device make the identification and classification of cotton pests more accurate, greatly aid targeted control of cotton pests, and reduce the economic loss they cause.

Description

Cotton pest identification and classification method and device
Technical Field
The embodiment of the invention relates to the technical field of digital image processing, in particular to a cotton pest identification and classification method and device.
Background
Cotton is an important economic crop. With the rapid development of the cotton industry, cotton cultivation technology has made new breakthroughs; however, technology for preventing and controlling cotton diseases and insect pests has seen little improvement, so the losses they cause remain huge. Cotton fields harbor pests of many species and in large numbers, and the outdated way in which cotton pests are identified is an important reason why control technology develops slowly. Current identification and classification methods for cotton pests are limited, and a scientific, reasonable and accurate classification method is lacking.
The prior art provides a method for identifying and classifying cotton aphids: salient feature points in cotton aphid images are extracted by two local feature operators, Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), and the Euclidean distances between feature point descriptors are compared in the feature space to measure the similarity between objects of different classes, thereby identifying and classifying the cotton aphids. Because the SIFT and SURF local feature algorithms operate on grayscale images, their ability to describe contours is limited, and recognition of insect images with complex backgrounds is poor. Moreover, measuring inter-class similarity by Euclidean distance considers only the distance between objects of different classes and ignores the influence of other factors on the identification and classification of Aphis gossypii Glover, so the final classification effect is poor.
The prior art also provides a method for identifying cotton pests in which various digital image processing techniques are applied according to the differing external morphological characteristics of different cotton pest species. External morphological characteristic parameters of the pests are extracted, the Kolmogorov test is used to judge whether they follow a normal distribution, and classification then proceeds according to the result: pests whose parameters are approximately normally distributed are classified with a fuzzy clustering algorithm, while the rest are classified with a binary-tree support vector machine, yielding the final classification result. However, because this scheme uses fuzzy clustering, the number of classes must be known in advance; if the chosen number is unreasonable, recalculation is needed, which makes the computation excessive.
Disclosure of Invention
In order to overcome the problems or at least partially solve the problems, the embodiment of the invention provides a cotton pest identification and classification method and device.
In one aspect, an embodiment of the present invention provides a cotton pest identification and classification method, including:
s1, acquiring an original image containing cotton pests to be classified, and acquiring Hu invariant moment parameters of the original image;
s2, acquiring overall contour characteristic parameters of the cotton pests to be classified based on a Canny edge detection algorithm, acquiring wing images of the cotton pests to be classified based on an Otsu threshold segmentation algorithm and the Canny edge detection algorithm, and extracting the wing contour characteristic parameters of the cotton pests to be classified from the wing images;
s3, optimizing the wing image based on a mathematical morphology algorithm, and determining mathematical morphology characteristic parameters corresponding to the wings of the cotton pests to be classified based on the optimized wing image;
s4, processing the Hu invariant moment parameter, the overall contour characteristic parameter, the wing contour characteristic parameter and the mathematical morphology characteristic parameter based on a fuzzy clustering FCM algorithm, inputting the four processed parameters into a radial basis function neural network, and outputting the category of the cotton pest to be classified by the radial basis function neural network.
On the other hand, the embodiment of the invention provides a cotton pest identification and classification device, which comprises:
the Hu invariant moment parameter acquisition module is used for acquiring an original image containing cotton pests to be classified and acquiring Hu invariant moment parameters of the original image;
the contour characteristic parameter acquisition module is used for acquiring the overall contour characteristic parameters of the cotton pests to be classified based on a Canny edge detection algorithm, acquiring wing images of the cotton pests to be classified based on an Otsu threshold segmentation algorithm and the Canny edge detection algorithm, and extracting the wing contour characteristic parameters of the cotton pests to be classified from the wing images;
the mathematical morphological characteristic parameter acquisition module is used for optimizing the wing image based on a mathematical morphological algorithm and determining mathematical morphological characteristic parameters corresponding to the wings of the cotton pests to be classified based on the optimized wing image;
and the category determining module is used for processing the Hu invariant moment parameter, the overall contour characteristic parameter, the wing contour characteristic parameter and the mathematical morphology characteristic parameter based on a fuzzy clustering FCM algorithm, inputting the four processed parameters into a radial basis function neural network, and outputting the category of the cotton pests to be classified by the radial basis function neural network.
In another aspect, an embodiment of the present invention provides a cotton pest identification and classification device, including:
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method described above.
In another aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method described above.
The cotton pest identification and classification method and device provided by the embodiment of the invention can enable identification and classification of cotton pests to be more accurate, further have a great effect on targeted control of the cotton pests, and greatly reduce economic loss caused by the cotton pests.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a method for identifying and classifying cotton pests according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a cotton pest identification and classification device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In view of the shortcomings of chemical control in field pest management, and in order to better combine machine vision with biological control and improve the pest control effect, the embodiment of the invention provides a cotton pest identification and classification method based on digital image processing.
As shown in fig. 1, an embodiment of the present invention provides a method for identifying and classifying cotton pests, including:
s1, acquiring an original image containing cotton pests to be classified, and acquiring Hu invariant moment parameters of the original image;
s2, acquiring overall contour characteristic parameters of the cotton pests to be classified based on a Canny edge detection algorithm, acquiring wing images of the cotton pests to be classified based on an Otsu threshold segmentation algorithm and the Canny edge detection algorithm, and extracting the wing contour characteristic parameters of the cotton pests to be classified from the wing images;
s3, optimizing the wing image based on a mathematical morphology algorithm, and determining mathematical morphology characteristic parameters corresponding to the wings of the cotton pests to be classified based on the optimized wing image;
s4, processing the Hu invariant moment parameter, the overall contour characteristic parameter, the wing contour characteristic parameter and the mathematical morphology characteristic parameter based on a fuzzy clustering FCM algorithm, inputting the four processed parameters into a radial basis function neural network, and outputting the category of the cotton pest to be classified by the radial basis function neural network.
Specifically, explanation is first made regarding terms appearing in the embodiments of the present invention:
Hu invariant moments: moment features mainly characterize the geometric properties of an image region and are therefore called geometric moments; because they are invariant to rotation, translation, scaling and similar transformations, they are also called invariant moments.
Canny edge detection algorithm: the Canny edge detection algorithm is an algorithm which applies an optimization idea to image processing, can eliminate false edges caused by noise interference when an image is analyzed and processed, ensures the accuracy of detected edge information and improves the edge detection precision.
Otsu threshold segmentation algorithm: the gray scale is divided into two or more gray-level intervals (of equal or unequal width), and each pixel is assigned to the target region or the background region according to whether its gray value satisfies the threshold, producing a binary image. Because there are obvious gray-level differences between object and background, and between different objects, distinct peaks appear in the gray-level histogram of the image; when the peaks are well separated, a valley between them is a natural threshold candidate. Let H = {0, 1, ..., L-1} be the set of possible pixel gray values, where L is the number of gray levels of the image A(x, y), and let P_i be the frequency of gray value i. A threshold T partitions the pixels into two classes by gray value: C0 = {0, 1, ..., T} and C1 = {T+1, ..., L-1}, where C0 and C1 denote the two sets of pixels.
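The thresholding idea above can be sketched in NumPy. This is a minimal illustration of Otsu's criterion (exhaustively searching for the T that maximizes the between-class variance of C0 and C1), not the patent's implementation:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold T that maximizes the between-class
    variance of C0 = {0..T} and C1 = {T+1..255} for a uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                    # P_i: gray-level frequencies
    omega = np.cumsum(p)                     # probability of class C0 up to T
    mu = np.cumsum(p * np.arange(256))       # first moment up to T
    mu_total = mu[-1]
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.inf               # guard against empty classes
    sigma_b2 = (mu_total * omega - mu) ** 2 / denom
    return int(np.argmax(sigma_b2))
```

Applying the returned T as `gray > T` yields the binary image described in the text.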
LASSO linear regression: the LASSO estimate can be expressed as the minimizer of the residual sum of squares plus an additional penalty term on the coefficients. Like subset selection, this method produces an interpretable model, while also exhibiting the stability of ridge regression.
Fuzzy clustering (Fuzzy c-means, FCM) algorithm: the Fuzzy C-means algorithm is called FCM algorithm for short, is a Fuzzy clustering algorithm based on an objective function, and is mainly used for clustering analysis of data.
Radial Basis Function (RBF) neural network: the RBF neural network is structured like a multilayer feed-forward network and consists of an input layer, a hidden layer and an output layer. The first layer, the input layer, is composed of signal source nodes and passes the signal to the hidden layer. The second layer is the hidden layer, whose node transfer function is a non-negative nonlinear function that is radially symmetric about a center point and decays away from it. The third layer, the output layer, responds to the input pattern with a simple linear function. After the FCM algorithm has processed the raw data and the radial basis functions have been generated, the connection weights from the hidden layer to the output layer are learned. To reduce the complexity of the RBF neural network, a LASSO penalty is incorporated into the solution of these hidden-to-output weights, so that hidden-layer nodes are pruned and the complexity of the network is reduced.
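A minimal NumPy sketch of the three-layer structure just described. The patent solves the hidden-to-output weights with a LASSO penalty; to stay short, this sketch substitutes plain least squares, and the centers, width σ and sample data are illustrative assumptions:

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Hidden-layer activations: Gaussian radial basis around each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def rbf_fit(X, y, centers, sigma):
    """Learn hidden-to-output weights (least squares here; the patent
    adds a LASSO penalty at this step to prune hidden nodes)."""
    H = rbf_design(X, centers, sigma)
    W, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W

def rbf_predict(X, centers, sigma, W):
    """Output layer: a simple linear combination of hidden activations."""
    return rbf_design(X, centers, sigma) @ W
```

The LASSO step would replace `lstsq` with an L1-penalized solver, zeroing out the weights of redundant hidden nodes.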
The method first acquires an original image containing the cotton pest to be classified and extracts its overall characteristic parameters, which comprise the Hu invariant moment parameters and the overall contour characteristic parameters: the Hu invariant moment parameters of the original image are obtained first, and then the overall contour characteristic parameters of the pest are obtained with the Canny edge detection algorithm. The pest image is then separated from the background of the original image using the Otsu threshold segmentation algorithm and the Canny edge detection algorithm, the wing image of the pest is acquired, and the wing contour characteristic parameters of the pest are extracted from it. The wing image is optimized with a mathematical morphology algorithm, and the mathematical morphology characteristic parameters corresponding to the wings are extracted from the optimized wing image. Finally, the radial basis function neural network identifies and classifies the cotton pest. The wing contour characteristic parameters and the corresponding mathematical morphology characteristic parameters serve as local characteristic parameters.
The cotton pest identification and classification method provided by the embodiment of the invention enables identification and classification of cotton pests to be more accurate, further has a great effect on targeted control of the cotton pests, and greatly reduces economic loss caused by the cotton pests.
Based on the above embodiment, the original image containing the cotton pest to be classified in S1 may be acquired by a camera or other image acquisition device.
Acquiring the Hu invariant moment parameters of the original image specifically comprises: acquiring the 7 Hu invariant moment values of the original image. One of the core problems of image recognition is feature extraction, which can be described simply: the whole image is described by a set of image descriptors, and the simpler and more representative the set, the better. Good image descriptors are not disturbed by lighting, noise or geometric distortion. Since the Hu invariant moments are invariant to translation, gray-level change, scaling and rotation, the 7 Hu invariant moments are used as one of the main characteristic parameters of the original image in the embodiment of the present invention.
Moments are numerical features of random variables. Let X be a random variable, c a constant, and k a positive integer. The quantity E[(X - c)^k] is called the k-th moment of X about the point c. When c = 0, a_k = E(X^k) is called the k-th origin moment of X; when c = E(X), mu_k = E[(X - E(X))^k] is called the k-th central moment of X.
The first-order origin moment is the expectation, and the first-order central moment mu_1 = 0. The second-order central moment mu_2 is the variance Var(X) of X. The third-order central moment mu_3 can be used to measure whether the distribution is skewed, and the fourth-order central moment mu_4 measures how sharply the distribution (density) peaks around the mean.
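These identities are easy to verify numerically on sample moments (the sample size and distribution below are illustrative assumptions):

```python
import numpy as np

def central_moment(x, k):
    """Sample estimate of the k-th central moment E[(X - E[X])^k]."""
    return np.mean((x - x.mean()) ** k)

rng = np.random.default_rng(0)
# A normal sample: its first central moment is ~0 and its second is the variance.
x = rng.normal(loc=3.0, scale=2.0, size=100_000)
```

For instance, `central_moment(x, 2)` coincides with `x.var()` by definition, and `central_moment(x, 1)` is zero up to floating-point error.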
Regarding an image, the coordinates of pixels are regarded as a two-dimensional random variable (X, Y), and a gray image can be represented by a two-dimensional gray density function, so that the gray image can be characterized by moments. The original image adopted in the embodiment of the present invention is a grayscale image.
In the embodiment of the invention, the region occupied by the cotton pest in the original image is taken as the target region. For the target region g(x, y), the (p + q)-order moment is defined as:

m_pq = sum_x sum_y x^p y^q g(x, y)    (1)

The central moment of the target region g(x, y) is defined as:

mu_pq = sum_x sum_y (x - x̄)^p (y - ȳ)^q g(x, y)    (2)

where x̄ = m10/m00 and ȳ = m01/m00 are the abscissa and ordinate of the centroid of the target region g(x, y), i.e. (x̄, ȳ) is the position of the centroid. The normalized (p + q)-order moment of the target region g(x, y) can then be defined as:

eta_pq = mu_pq / mu_00^rho    (3)

where in the above formula:

rho = (p + q)/2 + 1    (4)

In the embodiment of the present invention, the 7 Hu invariant moments are constructed from the normalized central moments, as shown in equations (5) to (11):

phi1 = eta20 + eta02    (5)
phi2 = (eta20 - eta02)^2 + 4 eta11^2    (6)
phi3 = (eta30 - 3 eta12)^2 + (3 eta21 - eta03)^2    (7)
phi4 = (eta30 + eta12)^2 + (eta21 + eta03)^2    (8)
phi5 = (eta30 - 3 eta12)(eta30 + eta12)[(eta30 + eta12)^2 - 3(eta21 + eta03)^2] + (3 eta21 - eta03)(eta21 + eta03)[3(eta30 + eta12)^2 - (eta21 + eta03)^2]    (9)
phi6 = (eta20 - eta02)[(eta30 + eta12)^2 - (eta21 + eta03)^2] + 4 eta11 (eta30 + eta12)(eta21 + eta03)    (10)
phi7 = (3 eta21 - eta03)(eta30 + eta12)[(eta30 + eta12)^2 - 3(eta21 + eta03)^2] - (eta30 - 3 eta12)(eta21 + eta03)[3(eta30 + eta12)^2 - (eta21 + eta03)^2]    (11)
in general, 7 Hu invariant moments can be used to identify relatively obvious objects in an image, but the identification effect is not good for an image with excessively complex texture features, so in order to avoid this, in this embodiment, an overall contour feature parameter of an original image is further obtained based on a Canny edge detection algorithm, so that the feature extraction effect is better.
On the basis of the above embodiment, the obtaining of the overall contour characteristic parameters of the cotton pests to be classified based on the Canny edge detection algorithm in S2 specifically includes:
s201, respectively acquiring a first gradient amplitude component of a pixel gray value in a first direction and a second gradient amplitude component of a pixel gray value in a second direction in the original image based on a first-order difference operator, wherein the first direction and the second direction are perpendicular to each other;
s202, determining a gradient amplitude of each pixel and a gradient direction corresponding to each pixel in the original image based on the first gradient amplitude component and the second gradient amplitude component, wherein the pixel with the maximum gradient amplitude in the gradient direction is an edge pixel of the original image, and all edge pixels of the original image form an edge image;
s203, traversing each pixel in the edge image, and based on the magnitude relation between the gradient amplitude of each pixel and the gradient amplitudes of two adjacent pixels in the gradient direction corresponding to each pixel, refining the edge contour of the original image into a pixel width, and obtaining an edge contour image;
s204, based on a Canny edge detection algorithm, extracting the overall contour characteristic parameters of the edge contour image by adopting a self-adaptive threshold value, and taking the overall contour characteristic parameters of the edge contour image as the overall contour characteristic parameters of the cotton pests to be classified.
Specifically, current edge detection algorithms include those based on the Roberts, Sobel, Prewitt, LOG and Canny operators. Extensive testing shows that, in the technical field of the invention, the Canny-operator edge detection algorithm handles image contours better than the algorithms based on the other operators, so the embodiment of the invention adopts the Canny edge detection algorithm to extract the overall contour features.
In the embodiment of the invention, the horizontal direction is taken as the first direction and the vertical direction as the second. First, a first-order difference operator is used to compute the first gradient magnitude component of the pixel gray values in the horizontal direction and the second gradient magnitude component in the vertical direction, yielding the gradient magnitude M and the corresponding gradient direction of each pixel in the original image. To improve the accuracy of the gradient computation, an operator such as the Sobel or Roberts operator may be used to calculate the gradient magnitude and direction for each pixel. The pixel whose gradient magnitude is a maximum along the gradient direction is an edge pixel, and all the edge pixels in the original image form the edge image.
To thin the detected image edges, each pixel in the edge image is traversed, and the gradient magnitudes of its two neighbouring pixels along the gradient direction are obtained by interpolation. If the gradient magnitude of the current pixel is greater than or equal to both, the current pixel is an edge pixel; otherwise it is a non-edge pixel. After this step the edge contour of the original image is reduced to one pixel in width, i.e. the edge image has undergone non-maximum suppression, yielding the edge contour image.
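The non-maximum suppression step can be sketched as follows. Note that this illustration quantizes the gradient direction into 45-degree sectors rather than interpolating the neighbour magnitudes as the text describes, which is a common simplification:

```python
import numpy as np

def non_max_suppression(mag, gx, gy):
    """Keep a pixel only if its gradient magnitude is >= both
    neighbours along the (quantized) gradient direction."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:         # ~horizontal gradient
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                     # ~45 degrees
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                    # ~vertical gradient
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                              # ~135 degrees
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]
    return out
```

On a blurred vertical edge, only the column of peak magnitude survives, thinning the contour to one pixel as described.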
Based on a Canny edge detection algorithm, an edge contour image is extracted by adopting a self-adaptive threshold value T, edges are continuously searched and tracked, and edge information of cotton pests to be classified is comprehensively obtained.
On the basis of the above embodiment, S201 further includes:
removing pulse noise in the original image based on a switch type median filtering algorithm to obtain a noise-removed image;
and based on a Gaussian smoothing filter, performing fuzzification processing on the de-noised image to obtain a smooth image.
Specifically, the impulse noise in the original image is first removed with a switching median filtering algorithm. Pulse noise mainly comprises impulse noise and strong-wave (pressure-wave) noise; only impulse noise is considered in the embodiment of the invention. Because impulse noise strongly affects the result of edge detection, noise in the image is detected with Laplacian operators in four directions, and the result of the switching median filter then replaces the gray value of each noise pixel in the original image. Since the switching median filtering algorithm takes the continuity of edges along each direction into account, the edge detail of the image is preserved and a good denoising effect is obtained.
The four directional laplacian operators K1, K2, K3, K4 in the switch-mode median filtering algorithm are as follows:
where K1 is the Laplacian operator for the 0-degree direction, K2 for the 45-degree direction, K3 for the 90-degree direction, and K4 for the 135-degree direction.
First, each pixel in the original image is convolved with the 4 directional Laplacian operators, giving 4 convolved images. At pixel (i, j), the minimum absolute value of the convolution results is used to detect whether the current pixel is a noise pixel:

r(i, j) = min over p = 1..4 of | I(i, j) * K_p |

where I(i, j) * K_p denotes the convolution of the image with the p-th Laplacian operator at pixel (i, j). Whether the current pixel is a noise pixel is then judged by comparing r(i, j) with a threshold T.
n (I, j) ═ 1 indicates that the pixel point I (I, j) is a noise pixel, that is, when r (I, j) is greater than the threshold T, the current pixel is a noise pixel, and the result of the switching median filtering is used to replace the pixel gray value of the noise pixel, otherwise, the current pixel is a signal pixel, and the pixel gray value remains unchanged.
The output of the switching median filter is therefore the median of the gray values in the neighbourhood window at a noise pixel, and the original gray value at a signal pixel.
and obtaining a noise-removed image after the noise-removed image is processed by a switch type median filtering algorithm.
Because no edge detection algorithm handles the remaining fine detail and noise in the de-noised image well on its own, the embodiment of the invention uses a Gaussian smoothing filter to convolve the de-noised image with a Gaussian mask, obtaining a smoothed image and thereby blurring the image (i.e. removing detail and noise).
The Gaussian smoothing filter is a very effective low-pass filter in both the spatial domain and the frequency domain; it exploits the following properties of the Gaussian function and is widely used by engineers in practical image processing:
① the two-dimensional Gaussian function has rotational symmetry;
② the Gaussian function is a single-valued function;
③ the Fourier transform spectrum of the Gaussian function is single-lobed;
④ the width of the Gaussian filter (which determines the degree of smoothing) is characterized by the parameter σ; the larger the band parameter σ of the Gaussian filter, i.e., the wider the band, the greater the degree of smoothing;
⑤ due to the separability of the two-dimensional Gaussian function, the two-dimensional convolution can be decomposed into two successive one-dimensional convolutions, which reduces the amount of calculation.
For images, the gaussian filter is a 2-dimensional convolution operator using a gaussian kernel to perform image blurring (i.e., to remove detail and noise).
The one-dimensional gaussian distribution formula is:
the two-dimensional gaussian distribution formula is:
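The mask construction and the separability property noted above can be sketched as follows (assumptions: unnormalized Gaussians, a 5×5 mask, σ = 1):

```python
import math

def gauss1d(x, sigma):
    """One-dimensional Gaussian (unnormalized)."""
    return math.exp(-x * x / (2.0 * sigma * sigma))

def gauss2d(x, y, sigma):
    """Two-dimensional rotationally symmetric Gaussian (unnormalized)."""
    return math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))

def gaussian_kernel(size, sigma):
    """Build a normalized size x size Gaussian smoothing mask."""
    half = size // 2
    k = [[gauss2d(x, y, sigma) for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

kernel = gaussian_kernel(5, 1.0)
```

The separability property ⑤ shows up as gauss2d(x, y, σ) = gauss1d(x, σ) · gauss1d(y, σ), which is why the 2-D convolution can be split into two 1-D passes.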
after obtaining the smoothed image, S201 is changed to S201': and respectively acquiring a first gradient amplitude component of the pixel gray value in the first direction and a second gradient amplitude component of the pixel gray value in the second direction in the smoothed image based on a first-order difference operator, wherein the first direction and the second direction are perpendicular to each other, the first direction is the x direction, and the second direction is the y direction.
Accordingly, S202 becomes S202': and determining the gradient amplitude of each pixel in the smoothed image and the gradient direction corresponding to each pixel based on the first gradient amplitude component and the second gradient amplitude component, wherein the pixel with the maximum gradient amplitude in the gradient direction is an edge pixel of the smoothed image, and all edge pixels of the smoothed image form the edge image.
Specifically, the gradient amplitude of each pixel in the smoothed image is determined by the following formula:

G = √(Gx² + Gy²)

wherein Gx is the first gradient amplitude component and Gy is the second gradient amplitude component.

The angle θ between the gradient direction corresponding to each pixel in the smoothed image and the second direction (i.e., the vertical direction y) is determined by the following formula:

θ = arctan(Gx / Gy)
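The gradient computation of S201' and S202' can be illustrated with simple forward differences; the exact first-order difference stencil is not given in the text, so this minimal version is an assumption:

```python
import math

def gradient(img, i, j):
    """First-order difference gradient at pixel (i, j).

    Gx and Gy are forward differences in the x (horizontal) and
    y (vertical) directions; this particular stencil is an assumption.
    """
    gx = img[i][j + 1] - img[i][j]   # first gradient amplitude component
    gy = img[i + 1][j] - img[i][j]   # second gradient amplitude component
    magnitude = math.sqrt(gx * gx + gy * gy)
    angle = math.degrees(math.atan2(gy, gx))  # direction of steepest ascent
    return magnitude, angle

# Vertical step edge: columns 0-1 dark, columns 2-4 bright
img = [[0, 0, 100, 100, 100] for _ in range(4)]
mag_edge, ang_edge = gradient(img, 1, 1)   # straddles the edge
mag_flat, _ = gradient(img, 1, 3)          # inside the flat region
```

At the step the magnitude equals the step height and the gradient points horizontally (angle 0), while the flat region yields zero magnitude, which is what the non-maximum suppression of S203' relies on.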
accordingly, S203 becomes S203': traversing each pixel in the edge image, and based on the magnitude relation between the gradient amplitude of each pixel and the gradient amplitudes of two adjacent pixels in the gradient direction corresponding to each pixel, refining the edge contour of the smooth image into a pixel width, and obtaining an edge contour image.
Accordingly, S204 becomes S204': and extracting the overall contour characteristic parameters of the edge contour image by adopting a self-adaptive threshold value based on a Canny edge detection algorithm.
On the basis of the above embodiment, the adaptive threshold in S204 or S204' is specifically obtained as follows:
s2041, segmenting the original image based on a threshold estimation value, and dividing pixels in the original image into a first group and a second group according to the size of a pixel gray value; the gray value of each pixel in the first group is greater than or equal to a preset value, and the gray value of each pixel in the second group is less than the preset value;
s2042, calculating a first average pixel gray value and a second average pixel gray value of all pixels in the first group and the second group respectively, and calculating an average value of the first average pixel gray value and the second average pixel gray value;
s2043, if the difference between the preset value and the average value is smaller than the threshold estimation value, using the preset value as the adaptive threshold, otherwise, updating the preset value, and repeating S2041 and S2042 until the difference between the preset value and the average value is smaller than the threshold estimation value.
Specifically, gradient amplitudes below the threshold are regarded as noise and discarded, while amplitudes above the threshold are assumed to belong to the useful part of the signal and are retained. The embodiment of the invention uses a method for obtaining an adaptive threshold; this method can obtain the optimal threshold for any picture, and hence the most complete image contour, and is more convenient than setting the threshold manually, which requires repeated experiments with different thresholds until the complete contour of the image is obtained. The iterative steps for automatically determining the adaptive threshold are as follows:
1) Randomly select a threshold estimation value T0. In the embodiment of the present invention, as a preferable scheme, the threshold estimation value may be chosen as the middle value between the maximum pixel gray value and the minimum pixel gray value in the original image.
2) Segment the original image using the threshold estimate T0 and randomly select a preset value T; two groups of pixels are generated: a first group G1 composed of all pixels in the original image whose pixel gray value is greater than or equal to T, and a second group G2 composed of all pixels whose pixel gray value is less than T.
Respectively calculate the first average pixel gray value T2 of all pixels in G1 and the second average pixel gray value T3 of all pixels in G2.
3) Calculate the average T4 = (T2 + T3) / 2 of T2 and T3, and compare |T − T4| with T0. If |T − T4| < T0, T is the finally obtained adaptive threshold; if |T − T4| ≥ T0, update the value of T and repeat steps 2) and 3) until |T − T4| < T0, at which point T is the finally obtained adaptive threshold.
The smaller the difference between T and T4 is, the larger the difference between the cotton pest image to be classified and the background in the original image is; when |T − T4| is less than T0, the difference between the cotton pest image to be classified and the background is largest, and the obtained T is the most appropriate threshold.
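The iterative threshold steps 1)–3) above can be sketched as follows (a hypothetical implementation on a flat list of gray values, using the convergence test |T − T4| < T0):

```python
def adaptive_threshold(pixels, t0=1.0):
    """Iterative threshold selection, as in steps 1)-3) above.

    Starts from the midpoint of the gray range and repeats the
    split/average steps until the threshold moves by less than t0.
    """
    t = (max(pixels) + min(pixels)) / 2.0   # initial estimate
    while True:
        g1 = [p for p in pixels if p >= t]  # first group G1
        g2 = [p for p in pixels if p < t]   # second group G2
        t2 = sum(g1) / len(g1) if g1 else t # first average gray value
        t3 = sum(g2) / len(g2) if g2 else t # second average gray value
        t4 = (t2 + t3) / 2.0
        if abs(t - t4) < t0:                # |T - T4| < T0 -> converged
            return t4
        t = t4

# Bimodal data: dark background at 10, bright object at 200
pixels = [10] * 60 + [200] * 40
th = adaptive_threshold(pixels)
```

On this clean bimodal example the iteration converges immediately to the midpoint of the two group means, 105.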
On the basis of the above embodiment, the obtaining of the wing image of the cotton pest to be classified based on Otsu threshold segmentation algorithm and Canny edge detection algorithm and the extracting of the wing contour feature parameters of the cotton pest to be classified from the wing image specifically include:
after acquiring the overall contour characteristic parameters of the cotton pests to be classified based on a Canny edge detection algorithm, separating wing images of the cotton pests to be classified from images obtained after Canny edge detection algorithm processing by adopting an optimal threshold based on an Otsu threshold segmentation algorithm; and then extracting wing contour characteristic parameters of the cotton pests to be classified in the wing images based on a Canny edge detection algorithm.
Specifically, some kinds of insects resemble other kinds in overall external form, so the extracted mathematical form parameters are also similar, which affects later classification and identification and increases the classification error rate. To prevent this increase, mathematical parameters can also be extracted from individual parts of the insect body, so that insects that are not well separated by the global parameters can be classified more accurately according to the local parameters. Therefore, in the embodiment of the invention, after the Canny edge detection algorithm is used to extract the overall contour characteristic parameters of the original image, the Otsu threshold segmentation algorithm is used to separate the wings of the cotton pests to be classified in the image obtained after Canny edge detection (i.e., the cotton pest image to be classified), so as to obtain the wing image of the cotton pests to be classified. The wing contour characteristic parameters of the cotton pests to be classified are then extracted from the wing image by the Canny edge detection algorithm.
That is, for the wing image of the cotton pest to be classified acquired in S2, on the basis of Canny edge detection, the wing image of the cotton pest to be classified is segmented from the image obtained after being processed by the Canny edge detection algorithm (i.e., the cotton pest image to be classified) by using an Otsu threshold segmentation algorithm.
The optimal threshold used in the Otsu threshold segmentation algorithm is determined by the following method:
s211, dividing the pixels in the original image into a third group and a fourth group according to the size of the pixel gray value; the gray value of each pixel in the third group is less than or equal to a threshold, and the gray value of each pixel in the fourth group is greater than the threshold;
s212, respectively determining a first proportion of pixel gray values of all pixels in the third group and a second proportion of pixel gray values of all pixels in the fourth group based on the value probability of each pixel gray value;
s213, respectively determining a first mean value of the pixel gray scale values of all the pixels in the third group and a second mean value of the pixel gray scale values of all the pixels in the fourth group based on the first ratio and the second ratio, and determining a third mean value of the pixel gray scale values of all the pixels in the original image based on the first mean value and the second mean value;
s214, determining the inter-group variance and the sum of the intra-group variances of the pixel gray values of all the pixels in the original image based on the first proportion, the second proportion, the first mean and the second mean;
s215, determining the optimal threshold value based on the threshold value, the sum of the intra-group variances and the inter-group variance.
Assume the gray levels of the original image comprise L levels, with corresponding pixel gray values 0, 1, …, L−2, L−1. The total number of pixels is N, and ni is the number of pixels with pixel gray value i; the probability of taking a certain pixel gray value (i.e., the value probability) is then: p(i) = ni / N.
A threshold T5 divides the pixels in the original image into two groups according to the pixel gray value: a third group G3 of all pixels whose gray value is less than or equal to T5, and a fourth group G4 of all pixels whose gray value is greater than T5.
Wherein the proportions of G3 and G4 are respectively:
wherein, ω is0(T5) Is a first ratio, ω1(T5) Is the second ratio.
According to the first proportion ω0(T5) and the second proportion ω1(T5), the means of the pixel gray values of all pixels in G3 and G4 can be determined as:

μ0(T5) = Σ i=0..T5 i·p(i) / ω0(T5),    μ1(T5) = Σ i=T5+1..L−1 i·p(i) / ω1(T5)

wherein μ0(T5) is the first mean and μ1(T5) is the second mean.
According to the first mean μ0(T5) and the second mean μ1(T5), the third mean of the pixel gray values of all pixels in the original image, i.e., the total gray mean of the original image, can be determined as:

μ(T5) = ω0(T5)·μ0(T5) + ω1(T5)·μ1(T5)

wherein μ(T5) is the third mean, i.e., the total gray mean of the original image.
Based on the first proportion ω0(T5), the second proportion ω1(T5), the first mean μ0(T5) and the second mean μ1(T5), the inter-group variance of the pixel gray values of all pixels in the original image is determined as:

σB²(T5) = ω0(T5)·(μ0(T5) − μ(T5))² + ω1(T5)·(μ1(T5) − μ(T5))² = ω0(T5)·ω1(T5)·(μ0(T5) − μ1(T5))²    (25)

wherein σB²(T5) is the inter-group variance of the pixel gray values of all pixels in the original image.
According to the first proportion ω0(T5) and the first mean μ0(T5), the variance σ0²(T5) of the pixel gray values of all pixels in the third group G3 is determined; according to the second proportion ω1(T5) and the second mean μ1(T5), the variance σ1²(T5) of the pixel gray values of all pixels in the fourth group G4 is determined; σ0²(T5) and σ1²(T5) are given by formula (26) and formula (27), respectively.

The sum of the intra-group variances is then:

σW²(T5) = ω0(T5)·σ0²(T5) + ω1(T5)·σ1²(T5)    (28)
The optimal threshold is determined from the threshold, the inter-group variance and the sum of the intra-group variances; that is, the optimal threshold Th is determined according to formula (25) and formula (28), as expressed in formula (29):
k is a constant, and the value of K can be set according to needs.
The intra-group variance represents the cohesiveness of all pixels in each group, and the smaller the intra-group variance is, the better the cohesiveness of the pixel points in each group is.
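The optimal-threshold search of S211–S215 can be sketched as an exhaustive scan maximizing the inter-group variance. This minimal version uses the simplification σB² = ω0·ω1·(μ0 − μ1)² from formula (25) and omits the constant K of formula (29):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose T5 maximizing the inter-group variance."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    prob = [h / n for h in hist]            # value probability p(i)
    best_t, best_var = 0, -1.0
    for t in range(levels - 1):
        w0 = sum(prob[: t + 1])             # first proportion w0(T5)
        w1 = 1.0 - w0                       # second proportion w1(T5)
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = sum(i * prob[i] for i in range(t + 1)) / w0
        mu1 = sum(i * prob[i] for i in range(t + 1, levels)) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal gray values: background 10, wings 200
pixels = [10] * 60 + [200] * 40
th = otsu_threshold(pixels)
```

For a clean bimodal histogram the scan returns a threshold separating the two modes, which is the property the wing segmentation step relies on.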
And for the step S2, extracting the wing contour characteristic parameters of the cotton pests to be classified from the wing images, extracting the wing contour characteristic parameters from the wing images by using a Canny edge detection algorithm.
S3, optimizing the wing image based on a mathematical morphology algorithm, and determining the mathematical morphology characteristic parameters corresponding to the wings of the cotton pests to be classified based on the optimized wing image. This is done because selecting too few features impairs subsequent classification and identification: some similar species cannot be distinguished when too little data is extracted. Therefore, in the embodiment of the invention, as many mathematical morphology characteristic parameters as possible are extracted from the original image, the mathematical features are then optimized over these parameters, and the parameters most suitable for classification and recognition are selected. In this way both quantity and efficiency are taken into account, and subsequent classification and identification become more accurate; hence the mathematical morphology characteristic parameters of the cotton pest wings need to be extracted in the embodiment of the invention to better identify and classify the cotton pests.
Before extracting mathematical morphological characteristic parameters of cotton pest wings, in order to solve the problem of wing edges and gaps, a mathematical morphological algorithm is needed to be adopted to optimize wing images, and the method specifically comprises the following steps: connecting the obtained gap and fracture of the edge of the insect wing by using a closing operation in a mathematical morphology algorithm, filling after the connection is finished, and performing wing segmentation and characteristic parameter extraction after the filling is finished to obtain an optimized wing image. The step of optimizing the wing image based on the mathematical morphology algorithm is explained in detail below.
After being processed by an Otsu threshold segmentation algorithm, the obtained wing image edge is outwards protruded and incomplete. Some tiny gaps and fractures existing in the wing image cannot be completely solved after the closing operation in the mathematical morphology, the details of the wing image are weakened after the filtering filling, and the filling effect is not beneficial to the subsequent mathematical feature extraction. Therefore, the embodiment of the invention judges whether the complete segmentation is completed by detecting whether the wing image contains tiny endpoints after completing various operations. If the fine end points are contained, the segmentation is not finished, the wing image segmentation operation is continued, and if the fine end points are not contained, the segmentation is finished, so that the segmentation has self-adaptability. Specifically, the above process can be implemented by performing the following operations on the wing image:
(1) Closing operation: perform a closing operation on the wing image after edge detection with a disc-shaped structural element of radius 3 to obtain a set X1, as shown in equation (30):
wherein, A is a set formed by each pixel in the wing image obtained after the edge detection, and b is a disc-shaped structural element with the radius of 3.
(2) Checking endpoints: search out the set X2 of all endpoints contained in A, as shown in formula (31):
wherein B is a disc-shaped structural element with a radius R, and R can be set according to requirements.
(3) Growth of burrs and endpoints: apply the complete dilation process to the set X2, using the set A as a reduction factor, to obtain the set X3, as shown in equation (32), wherein h is a constant whose value can be chosen as required.
(4) Edge improvement: edge improvement here means improving the grown pseudo edges to obtain the set X4, as shown in formula (33):

X4 = A − X3    (33)
(5) Feedback operation: after the edge improvement is completed, perform the closing operation on the improved edge with a disc-shaped structural element of radius 3 and record the resulting image as a set X5. Detect whether X5 contains endpoints; if it does, the pseudo edges in the set X2 have not grown sufficiently during the complete dilation process and should continue to grow, i.e., step (3) is repeated. If X5 contains no endpoints, proceed to the next operation.
(6) Finding the complete pseudo edges: because the set X5 results from a complete closing operation with the disc-shaped structural element of radius 3, the real edges may stick to the pseudo edges, which severely distorts the real edges. In this case, the matrix A may be used as a reduction factor to find the pseudo edges closest to the real edges.
(7) Obtaining the final segmentation map: subtracting the found pseudo-edge set X3 from the filtered and filled image yields the desired real edges, after which the closing operation, filling and filtering are performed again. The finally obtained wing image is smoother and the segmentation finer.
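Steps (1)–(7) rely on the morphological closing operation. A minimal binary closing (dilation followed by erosion, with a square structural element standing in for the disc of radius 3) can be sketched as:

```python
def dilate(img, r=1):
    """Binary dilation with a (2r+1) x (2r+1) square structural element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = int(any(
                img[a][b]
                for a in range(max(0, i - r), min(h, i + r + 1))
                for b in range(max(0, j - r), min(w, j + r + 1))))
    return out

def erode(img, r=1):
    """Binary erosion; pixels outside the image count as background."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            ok = all(
                0 <= a < h and 0 <= b < w and img[a][b]
                for a in range(i - r, i + r + 1)
                for b in range(j - r, j + r + 1))
            out[i][j] = int(ok)
    return out

def close_op(img, r=1):
    """Closing = dilation followed by erosion; bridges small gaps."""
    return erode(dilate(img, r), r)

# A horizontal line with a one-pixel break at column 2
img = [[0] * 5 for _ in range(5)]
img[2] = [1, 1, 0, 1, 1]
closed = close_op(img)
```

After closing, the one-pixel gap in the line is bridged, which is exactly the effect used above to connect the gaps and fractures of the wing edge.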
After the wing image has been optimized by the mathematical morphology algorithm, the mathematical morphology characteristic parameters of the cotton pest wing are extracted. The mathematical morphology characteristic parameters mainly comprise geometric characteristic parameters and moment characteristic parameters. The geometric characteristic parameters comprise: the length and width of the target region in the original image, the area of the target region, the contour perimeter of the target region, the shape parameter of the target region, the roundness of the target region and the eccentricity of the target region. The moment characteristic parameters comprise the center of gravity, the lobation, the circularity and the sphericity. These are detailed below.
1) Geometric characteristics
(1) The length and width of the target area, i.e. the length of the transverse axis and the length of the longitudinal axis of the target area
The horizontal axis and the vertical axis of the cotton pest image to be classified correspond to the long axis and the short axis of the boundary in the common image. Wherein the long axis of the boundary is defined as: the length between the two points on the boundary that are farthest apart; the short axis definition of the boundary is: the length between two intersections of a line perpendicular to the major axis and the boundary, and the specific values calculated are generally referred to as the width and the length, respectively.
The data collection of the specimen is directional, so the horizontal axis and the vertical axis are directional, and their data collection is determined according to this directionality. In insect classification, a specific index used to classify certain families of insects is the wing, namely the wingspan, of the insect. It follows that the measurement of the insect wingspan corresponds to the definition of the insect's horizontal axis, while the vertical axis is defined as the maximum length in the direction perpendicular to the horizontal axis.
(2) Area of target region
The area of the target region is the sum of the total number of boundary pixels of the target region and the total number of pixels within the boundary of the target region. In the embodiment of the invention, an area correction method is adopted to define the area, and the specific algorithm is as follows: the total number of all pixels located inside the boundary of the target area minus half the total number of pixels on the boundary plus one.
The formula is as follows:

S = S′ − N/2 + 1

In the above formula, S is the area after correction, S′ is the area before correction, and N is the total number of pixels on the boundary.
(3) Contour perimeter of target area
The perimeter is simply the sum of the distances between every two adjacent pixels, and different perimeter formulas yield different perimeter values. The simplest definition of the perimeter counts the pixels on the outside of the target area, but some boundary regions contain 90-degree right angles, which makes the computed perimeter too large. The embodiment of the invention therefore improves the perimeter calculation and computes the perimeter of the target region from the boundary chain code. The perimeter formula is as follows:

P = Ne + √2 · No

where P is the perimeter of the boundary of the target region, i.e., the contour perimeter of the target region, Ne is the number of even codes in the boundary chain code of the target region, and No is the number of odd codes in the boundary chain code of the target region.
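The chain-code perimeter can be sketched directly; the chain codes in the example are hypothetical Freeman 8-direction codes, where even codes are axis-aligned unit steps and odd codes are diagonal steps of length √2:

```python
import math

def chain_code_perimeter(codes):
    """Perimeter from a Freeman 8-direction boundary chain code.

    Even codes (0, 2, 4, 6) contribute 1 each and odd codes
    (1, 3, 5, 7) contribute sqrt(2) each: P = Ne + sqrt(2) * No.
    """
    ne = sum(1 for c in codes if c % 2 == 0)
    no = len(codes) - ne
    return ne + math.sqrt(2.0) * no

# Boundary of an axis-aligned 2x2 square: eight even steps
square = [0, 0, 6, 6, 4, 4, 2, 2]
p_square = chain_code_perimeter(square)

# A diagonal staircase of four odd steps
diag = [1, 1, 1, 1]
p_diag = chain_code_perimeter(diag)
```

Counting diagonal steps as √2 instead of 1 is what corrects the overestimate caused by 90-degree right angles in the simple pixel-count definition.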
(4) Shape parameters of a target area
The shape parameters are defined according to the relation between the perimeter and the area, the shape parameters have no concept of quantity, are insensitive to the shape size change scale of cotton pests, and are a good classification reference standard.
The shape parameter is defined as follows:

F = P² / (4πA)

In the above formula, F is the shape parameter of the target region, P is the perimeter of the target region, and A is the area of the target region. The value of F is 1 when the target region is circular and greater than 1 when the target region is of another shape.
(5) Roundness-like of target region
Roundness refers to the degree of similarity between the target region and a complete circle, and is defined by the following formula:

R = 4A / (πW²)

In the above formula, R represents the roundness, A represents the region area, and W represents the length of the horizontal axis. The magnitude of R reflects the degree of circularity of the target image.
(6) Eccentricity of target area
Eccentricity measures the compactness of the target region and can roughly separate long, thin figures from roughly square ones. The eccentricity is calculated by dividing the length W of the horizontal axis by the length L of the vertical axis:

E = W / L

In the above formula, E is the eccentricity, W is the length of the horizontal axis, and L is the length of the vertical axis.
2) Characteristic parameter of moment
(1) Center of gravity
The center of gravity of a particular object in a digital image is defined as:

x̄ = Σx Σy x·F(x, y) / Σx Σy F(x, y),    ȳ = Σx Σy y·F(x, y) / Σx Σy F(x, y)

In the above formulas, x̄ represents the abscissa of the center of gravity, ȳ represents the ordinate of the center of gravity, and F(x, y) represents the gray value of the pixel with coordinates (x, y) in the image. In a binary image, F(x, y) takes the value 1 or 0.
(2) Lobation

The lobation reflects the amplitude characteristics of the target region boundary. It is defined by the following formula:

In the above formula, B is the lobation, R1 is the shortest distance from the center of gravity of the target region to the boundary, and W is the length of the horizontal axis.
(3) Circularity
Circularity is a feature quantity defined by all boundary points in the target region, and is defined by the following formula:
among them are:
n is the total number of pixels in the target area, (x)i,yi) Representing any pixel within the target area.
(4) Sphericity of ball
Sphericity is the ratio of the radius of the inscribed circle to the radius of the circumscribed circle of the target region; it attains its maximum value of 1 when the target region is a complete circle and is less than 1 for other, irregular shapes. The specific expression of sphericity is the ratio R1 / R2, where R1 is the radius of the inscribed circle and R2 is the radius of the circumscribed circle.
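The center of gravity and sphericity described above can be sketched as follows (the centroid is taken over a binary image, so F(x, y) ∈ {0, 1}):

```python
def centroid(img):
    """Center of gravity of a binary image: mean of object coordinates."""
    sx = sy = n = 0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v:
                sx += x
                sy += y
                n += 1
    return sx / n, sy / n

def sphericity(r_inscribed, r_circumscribed):
    """Ratio R1 / R2; equals 1 only for a perfect circle."""
    return r_inscribed / r_circumscribed

# 3x3 solid square centred at (2, 2) inside a 5x5 image
img = [[0] * 5 for _ in range(5)]
for y in range(1, 4):
    for x in range(1, 4):
        img[y][x] = 1
cx, cy = centroid(img)
```

For the symmetric square the centroid lands exactly on its geometric center, and sphericity shrinks below 1 as the inscribed circle becomes smaller relative to the circumscribed one.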
On the basis of the above embodiment, the radial basis function neural network is constructed by:
processing Hu invariant moment parameters, overall contour characteristic parameters, wing contour characteristic parameters and mathematical morphology characteristic parameters corresponding to original images of a plurality of classified cotton pests based on a fuzzy clustering FCM algorithm, and generating radial basis functions based on the processed four types of parameters;
estimating parameters in the radial basis function based on the categories of the classified cotton pests, determining a connection weight between a hidden layer and an output layer in the radial basis function neural network based on a LASSO linear model, and constructing the radial basis function neural network.
Specifically, after Hu invariant moment parameters, integral contour characteristic parameters, wing contour characteristic parameters and mathematical morphology characteristic parameters corresponding to the original images of a plurality of classified cotton pests are determined, radial basis functions are generated, and parameters in the radial basis functions are estimated according to the categories of the plurality of classified cotton pests. And then learning the connection weight parameter between the hidden layer and the output layer in the RBF neural network. In order to reduce the complexity of the RBF neural network, LASSO punishment is integrated into weight parameter solving from the hidden layer to the output layer, so that nodes of the hidden layer are extracted, and the complexity of the neural network is reduced.
(1) Firstly, a LASSO linear regression method is adopted to select and compress variables
The linear regression problem model is:
y = w1·x1 + w2·x2 + … + wd·xd    (45)

wherein x1, x2, …, xd are the input variables; d = 4, and each x corresponds to one of the four classes of characteristic parameters, namely the Hu invariant moment parameters, the overall contour characteristic parameters, the wing contour characteristic parameters and the mathematical morphology characteristic parameters. y is the response variable, and w = (w1, w2, …, wd) are the coefficients, i.e., the connection weights.
Suppose X ∈ R^(n×d) is the input data matrix, where n is the number of samples and each column is a candidate input variable, and Y ∈ R^n represents the response variable vector. Given the d input variables x1, x2, …, xd, the response y is predicted as:
estimation using least squaresObtaining the formula of the LASSO algorithm by the least residual sum of squares and introducing an additional penalty term:
wherein,called the LASSO penalty, λ is a non-negative parameter. If the lambda is properly selected, the aim of variable sparseness is fulfilled.
(2) Second, the radial basis functions are constructed:
wherein xi ∈ R^n is an input vector, wj represents the connection weight from the hidden layer to the output layer, cj ∈ R^n (j = 1, 2, …, M) is the center of the jth node of the hidden layer, y(xi) represents the output for the ith input vector connected with the jth node of the hidden layer, φ(·) represents the radial basis function, and δi is a width value. In the embodiment of the present invention, a Gaussian function is used as the radial basis function; for convenience of theoretical analysis, the form of the Gaussian function can be expressed as:

φ(‖xi − cj‖) = exp(−‖xi − cj‖² / δi²)
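The Gaussian radial basis network output y(x) = Σj wj·φ(‖x − cj‖) can be sketched as follows; the centers, widths and weights in the example are illustrative, not learned values:

```python
import math

def rbf_output(x, centers, widths, weights):
    """y(x) = sum_j w_j * exp(-||x - c_j||^2 / delta_j^2).

    Gaussian RBF network output for one input vector x; the exact
    normalization of the width term is an assumption.
    """
    y = 0.0
    for c, d, w in zip(centers, widths, weights):
        sq = sum((xi - ci) ** 2 for xi, ci in zip(x, c))  # ||x - c_j||^2
        y += w * math.exp(-sq / (d * d))
    return y

# Two hidden nodes with illustrative parameters
centers = [[0.0, 0.0], [1.0, 1.0]]
widths = [1.0, 1.0]
weights = [2.0, 3.0]
y_at_c0 = rbf_output([0.0, 0.0], centers, widths, weights)
```

At a center the corresponding basis function equals 1, so the first weight dominates; far from all centers every basis function decays toward 0, which is the locality property that makes the hidden-layer responses well suited to FCM-derived centers.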
(3) Thirdly: estimating the parameters δi and cj within the radial basis function.
wherein the membership term is the degree to which the ith sample belongs to the jth class, and h is a manually adjusted scale parameter.
The treatment process comprises the following steps: firstly, an input vector x is subjected to certain mapping from an input layer to an output layer, then an output value y is obtained, and the mapping relation can be simplified as follows:
y = f(x) = xg·w    (52)

wherein w is the vector of the connection weights wj, and xg is the mapped value of the input vector x.
(4) Fourthly: learning of the connection weight w
Solving the connection weight w from the hidden layer to the output layer by using a LASSO linear problem, wherein the formula is as follows:
where λ is a non-negative parameter.
The update rule for the sth element of w is as follows:

This equation is iterated in a loop until w converges, where sgn(·) is the sign function and the quantity being thresholded is the analytical solution of the corresponding least-squares subproblem.
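The soft-thresholding update for w can be sketched as a small coordinate-descent loop; the design matrix and λ values in the example are illustrative:

```python
def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for LASSO with soft-thresholding updates.

    Each pass updates w_s to the soft-thresholded analytical solution
    of its one-dimensional subproblem, mirroring the update rule above:
    w_s = sgn(rho) * max(|rho| - lam, 0) / (x_s^T x_s).
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        for s in range(d):
            # partial residual correlation for coordinate s
            rho = sum(X[i][s] * (y[i] - sum(X[i][k] * w[k]
                      for k in range(d) if k != s)) for i in range(n))
            z = sum(X[i][s] ** 2 for i in range(n))
            if rho > lam:
                w[s] = (rho - lam) / z
            elif rho < -lam:
                w[s] = (rho + lam) / z
            else:
                w[s] = 0.0          # coordinate shrunk exactly to zero
    return w

# Orthonormal design: y depends only on the first variable
X = [[1.0, 0.0], [0.0, 1.0]]
y = [2.0, 0.0]
w_no_penalty = lasso_cd(X, y, lam=0.0)
w_penalized = lasso_cd(X, y, lam=1.0)
```

With λ = 0 the ordinary least-squares solution is recovered; with λ = 1 the nonzero weight is shrunk toward zero, which is the sparsity mechanism used to prune hidden-layer nodes.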
(5) And finally: determination of shrinkage parameters
In order to select the optimal parameter λ, an enumeration search is performed within the interval [λlb, λub], where λlb is zero and λub is a value large enough to ensure that all elements of w are equal to zero. A proper value is then selected according to the classification precision through grid search and a cross-validation (CV) strategy.
After four types of characteristic parameters of the cotton pest image are extracted, the FCM algorithm is used for processing the original data, and the characteristic parameters are used as an input layer of the RBF neural network, so that classification of cotton pests is realized.
As shown in fig. 2, on the basis of the above embodiment, the embodiment of the present invention further provides a cotton pest identification and classification device, including: the device comprises a Hu invariant moment parameter acquisition module 21, a contour characteristic parameter acquisition module 22, a mathematical morphology characteristic parameter acquisition module 23 and a category determination module 24. Wherein,
the Hu invariant moment parameter acquisition module 21 is used for acquiring an original image containing cotton pests to be classified and acquiring Hu invariant moment parameters of the original image;
the contour characteristic parameter acquisition module 22 is configured to acquire an overall contour characteristic parameter of the cotton pest to be classified based on a Canny edge detection algorithm, acquire a wing image of the cotton pest to be classified based on an Otsu threshold segmentation algorithm and the Canny edge detection algorithm, and extract the wing contour characteristic parameter of the cotton pest to be classified from the wing image;
the mathematical morphological characteristic parameter acquisition module 23 is configured to optimize the wing image based on a mathematical morphological algorithm, and determine a mathematical morphological characteristic parameter corresponding to a wing of the cotton pest to be classified based on the optimized wing image;
the category determining module 24 is configured to process the Hu invariant moment parameter, the whole contour feature parameter, the wing contour feature parameter, and the mathematical morphology feature parameter based on a fuzzy clustering FCM algorithm, input the four processed parameters to a radial basis function neural network, and output a category to which the cotton pest to be classified belongs by the radial basis function neural network.
Specifically, the functions and operation flows of the modules in the embodiment of the present invention correspond one to one with those of the method embodiments above, and are not repeated here.
On the basis of the above embodiment, the embodiment of the present invention further provides a cotton pest identification and classification device, including:
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, and the processor invokes the program instructions to perform the method shown in fig. 1.
On the basis of the above embodiments, an embodiment of the present invention further provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to execute the method shown in fig. 1.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A cotton pest identification and classification method is characterized by comprising the following steps:
S1, acquiring an original image containing cotton pests to be classified, and acquiring Hu invariant moment parameters of the original image;
S2, acquiring overall contour characteristic parameters of the cotton pests to be classified based on a Canny edge detection algorithm, acquiring wing images of the cotton pests to be classified based on an Otsu threshold segmentation algorithm and the Canny edge detection algorithm, and extracting the wing contour characteristic parameters of the cotton pests to be classified from the wing images;
S3, optimizing the wing image based on a mathematical morphology algorithm, and determining mathematical morphology characteristic parameters corresponding to the wings of the cotton pests to be classified based on the optimized wing image;
S4, processing the Hu invariant moment parameter, the overall contour characteristic parameter, the wing contour characteristic parameter and the mathematical morphology characteristic parameter based on a fuzzy clustering FCM algorithm, inputting the four processed parameters into a radial basis function neural network, and outputting the category of the cotton pest to be classified by the radial basis function neural network.
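The Hu invariant moments of step S1 are the seven classical moment invariants. A minimal, generic NumPy sketch (textbook formulas, not the implementation claimed here) that computes all seven from a grayscale array:

```python
import numpy as np

def hu_moments(img):
    """Seven Hu invariant moments of a grayscale image (step S1)."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):                       # central moment
        return ((x - xc) ** p * (y - yc) ** q * img).sum()

    def eta(p, q):                      # normalised central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
        + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ])
```

Because they are built from normalised central moments, the seven values are unchanged when the insect is translated within the frame, which is what makes them useful as pose-independent features.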
2. The method according to claim 1, wherein the step of obtaining the overall contour characteristic parameters of the cotton pests to be classified based on a Canny edge detection algorithm in the step S2 specifically comprises the following steps:
S201, respectively acquiring a first gradient amplitude component of a pixel gray value in a first direction and a second gradient amplitude component of a pixel gray value in a second direction in the original image based on a first-order difference operator, wherein the first direction and the second direction are perpendicular to each other;
S202, determining a gradient amplitude of each pixel and a gradient direction corresponding to each pixel in the original image based on the first gradient amplitude component and the second gradient amplitude component, wherein the pixel with the maximum gradient amplitude in the gradient direction is an edge pixel of the original image, and all edge pixels of the original image form an edge image;
S203, traversing each pixel in the edge image, and based on the magnitude relation between the gradient amplitude of each pixel and the gradient amplitudes of two adjacent pixels in the gradient direction corresponding to each pixel, refining the edge contour of the original image into a pixel width, and obtaining an edge contour image;
S204, based on a Canny edge detection algorithm, extracting the overall contour characteristic parameters of the edge contour image by adopting a self-adaptive threshold value, and taking the overall contour characteristic parameters of the edge contour image as the overall contour characteristic parameters of the cotton pests to be classified.
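Steps S201 to S203 can be sketched as first-order differences in two perpendicular directions, a per-pixel magnitude and direction, and non-maximum suppression that thins each edge to one pixel. The quantisation of the gradient direction into four neighbour pairs is an illustrative choice for this sketch, not something the claim specifies:

```python
import numpy as np

def thin_edges(img):
    """First-order difference gradients (S201), per-pixel magnitude and
    direction (S202), then non-maximum suppression along the gradient
    direction so the contour is one pixel wide (S203)."""
    img = img.astype(float)
    # S201: first-order differences in two perpendicular directions
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]     # horizontal component
    gy[:-1, :] = img[1:, :] - img[:-1, :]     # vertical component
    # S202: gradient magnitude and direction for every pixel
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180
    # S203: keep a pixel only if its magnitude is >= both neighbours
    # along its (quantised) gradient direction
    thin = np.zeros_like(mag)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            a = ang[r, c]
            if a < 22.5 or a >= 157.5:
                n1, n2 = mag[r, c - 1], mag[r, c + 1]
            elif a < 67.5:
                n1, n2 = mag[r - 1, c + 1], mag[r + 1, c - 1]
            elif a < 112.5:
                n1, n2 = mag[r - 1, c], mag[r + 1, c]
            else:
                n1, n2 = mag[r - 1, c - 1], mag[r + 1, c + 1]
            if mag[r, c] >= n1 and mag[r, c] >= n2:
                thin[r, c] = mag[r, c]
    return thin
```

On a vertical step edge this leaves exactly one nonzero column, which is the one-pixel-wide contour the claim describes.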
3. The method according to claim 2, wherein the adaptive threshold in S204 is obtained by:
S2041, segmenting the original image based on a threshold estimation value, and dividing pixels in the original image into a first group and a second group according to the size of a pixel gray value; the gray value of each pixel in the first group is greater than or equal to a preset value, and the gray value of each pixel in the second group is less than the preset value;
S2042, calculating a first average pixel gray value of all pixels in the first group and a second average pixel gray value of all pixels in the second group respectively, and calculating an average value of the first average pixel gray value and the second average pixel gray value;
S2043, if the difference between the preset value and the average value is smaller than the threshold estimation value, using the preset value as the adaptive threshold, otherwise, updating the preset value, and repeating S2041 and S2042 until the difference between the preset value and the average value is smaller than the threshold estimation value.
4. The method of claim 2, wherein S201 is preceded by:
removing pulse noise in the original image based on a switch type median filtering algorithm to obtain a noise-removed image;
and based on a Gaussian smoothing filter, performing fuzzification processing on the de-noised image to obtain a smooth image.
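The two pre-processing steps above could look as follows in NumPy. The 3×3 window and the impulse test (pixel equal to the window extremum) are simplifications for this sketch; the patent does not fix those details:

```python
import numpy as np

def switch_median(img):
    """Switch-type median filter: only pixels that look like impulse
    noise (equal to the window min or max) are replaced by the window
    median; all other pixels pass through unchanged."""
    out = img.astype(float).copy()
    p = np.pad(img.astype(float), 1, mode='edge')
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            w = p[r:r + 3, c:c + 3]
            if img[r, c] == w.min() or img[r, c] == w.max():
                out[r, c] = np.median(w)
    return out

def gaussian_smooth(img, sigma=1.0):
    """Separable Gaussian blur (the fuzzification step before Canny)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2)); k /= k.sum()
    blur_rows = lambda m: np.convolve(m, k, mode='same')
    tmp = np.apply_along_axis(blur_rows, 1, img.astype(float))
    return np.apply_along_axis(blur_rows, 0, tmp)
```

The switch test is what distinguishes this from a plain median filter: uncorrupted texture is left untouched, so edges survive better into the Canny stage.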
5. The method according to claim 2, wherein the obtaining of the wing image of the cotton pest to be classified based on Otsu threshold segmentation algorithm and Canny edge detection algorithm and the extracting of the wing contour feature parameters of the cotton pest to be classified from the wing image specifically comprise:
after acquiring the overall contour characteristic parameters of the cotton pests to be classified based on a Canny edge detection algorithm, separating wing images of the cotton pests to be classified from images obtained after Canny edge detection algorithm processing by adopting an optimal threshold based on an Otsu threshold segmentation algorithm;
and then extracting wing contour characteristic parameters of the cotton pests to be classified in the wing images based on a Canny edge detection algorithm.
6. The method of claim 5, wherein the optimal threshold is obtained by:
dividing pixels in the original image into a third group and a fourth group according to the size of a pixel gray value; the gray value of each pixel in the third group is less than or equal to a threshold, and the gray value of each pixel in the fourth group is greater than the threshold;
respectively determining a first proportion of the pixel gray values of all the pixels in the third group and a second proportion of the pixel gray values of all the pixels in the fourth group based on the value probability of each pixel gray value;
respectively determining a first mean value of pixel gray values of all pixels in the third group and a second mean value of pixel gray values of all pixels in the fourth group based on the first proportion and the second proportion, and determining a third mean value of pixel gray values of all pixels in the original image based on the first mean value and the second mean value;
determining an inter-group variance and a sum of intra-group variances of pixel gray values of all pixels in the original image based on the first proportion, the second proportion, the first mean and the second mean;
determining the optimal threshold based on the threshold, the sum of the intra-group variances, and the inter-group variance.
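The quantities in claim 6 (group proportions, group means, between-group variance) are those of Otsu's method: maximising the between-group variance is equivalent to minimising the sum of the intra-group variances. A direct NumPy sketch over a 256-bin histogram:

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive Otsu search: for every candidate threshold, split
    pixels into two groups, compute group proportions (w0, w1) and
    means (m0, m1), and keep the threshold maximising the
    between-group variance w0 * w1 * (m0 - m1) ** 2."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                  # probability of each gray value
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 = p[:t + 1].sum()               # third-group proportion
        w1 = 1.0 - w0                      # fourth-group proportion
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t + 1) * p[:t + 1]).sum() / w0
        m1 = (np.arange(t + 1, 256) * p[t + 1:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2     # between-group variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```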
7. The method of any one of claims 1-6, wherein the radial basis function neural network is constructed by:
processing Hu invariant moment parameters, overall contour characteristic parameters, wing contour characteristic parameters and mathematical morphology characteristic parameters corresponding to original images of a plurality of classified cotton pests based on a fuzzy clustering FCM algorithm, and generating radial basis functions based on the processed four types of parameters;
estimating parameters in the radial basis function based on the categories of the classified cotton pests, determining connection weight parameters between a hidden layer and an output layer in the radial basis function neural network based on an LASSO linear model, and constructing the radial basis function neural network.
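The construction in claim 7 (fuzzy C-means to obtain hidden-unit centres, Gaussian radial basis functions, then a fitted linear output layer) might be sketched as below. Note two loud assumptions: the output weights here are fit by ordinary least squares as a stand-in for the LASSO linear model named in the claim, and the FCM fuzziness exponent, iteration count and RBF width are illustrative defaults:

```python
import numpy as np

def fcm_centers(X, k, m=2.0, iters=50, seed=0):
    """Minimal fuzzy C-means: returns k cluster centres, used as the
    RBF hidden-layer centres."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k)); U /= U.sum(1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(0)[:, None]          # weighted centres
        d = np.linalg.norm(X[:, None, :] - C[None], axis=2) + 1e-9
        U = 1.0 / d ** (2 / (m - 1))                 # membership update
        U /= U.sum(1, keepdims=True)
    return C

def rbf_fit_predict(Xtr, ytr, Xte, k=4, sigma=1.0):
    """RBF network sketch: FCM centres -> Gaussian hidden layer ->
    output weights by least squares (stand-in for the LASSO fit)."""
    C = fcm_centers(Xtr, k)

    def hidden(X):
        d = np.linalg.norm(X[:, None, :] - C[None], axis=2)
        return np.exp(-d ** 2 / (2 * sigma ** 2))

    W, *_ = np.linalg.lstsq(hidden(Xtr), ytr, rcond=None)
    return hidden(Xte) @ W
```

In the patented pipeline, `Xtr` would hold the FCM-processed feature vectors (Hu moments, overall contour, wing contour, morphology features) of already-classified specimens, and the output would be decoded into a pest category.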
8. A cotton pest identification and classification device, characterized by comprising:
the Hu invariant moment parameter acquisition module is used for acquiring an original image containing cotton pests to be classified and acquiring Hu invariant moment parameters of the original image;
the contour characteristic parameter acquisition module is used for acquiring the overall contour characteristic parameters of the cotton pests to be classified based on a Canny edge detection algorithm, acquiring wing images of the cotton pests to be classified based on an Otsu threshold segmentation algorithm and the Canny edge detection algorithm, and extracting the wing contour characteristic parameters of the cotton pests to be classified from the wing images;
the mathematical morphological characteristic parameter acquisition module is used for optimizing the wing image based on a mathematical morphological algorithm and determining mathematical morphological characteristic parameters corresponding to the wings of the cotton pests to be classified based on the optimized wing image;
and the category determining module is used for processing the Hu invariant moment parameter, the overall contour characteristic parameter, the wing contour characteristic parameter and the mathematical morphology characteristic parameter based on a fuzzy clustering FCM algorithm, inputting the four processed parameters into a radial basis function neural network, and outputting the category of the cotton pests to be classified by the radial basis function neural network.
9. A cotton pest identification and classification device is characterized by comprising:
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any one of claims 1-7.
10. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of any one of claims 1-7.
CN201810812487.1A 2018-07-23 2018-07-23 Cotton pest identification and classification method and device Pending CN109102004A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810812487.1A CN109102004A (en) 2018-07-23 2018-07-23 Cotton pest identification and classification method and device


Publications (1)

Publication Number Publication Date
CN109102004A true CN109102004A (en) 2018-12-28

Family

ID=64847118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810812487.1A Pending Cotton pest identification and classification method and device

Country Status (1)

Country Link
CN (1) CN109102004A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930249A (en) * 2012-10-23 2013-02-13 四川农业大学 Method for identifying and counting farmland pests based on colors and models
CN103034872A (en) * 2012-12-26 2013-04-10 四川农业大学 Farmland pest recognition method based on colors and fuzzy clustering algorithm
CN103177266A (en) * 2013-04-07 2013-06-26 青岛科技大学 Intelligent stock pest identification system
CN104102920A (en) * 2014-07-15 2014-10-15 中国科学院合肥物质科学研究院 Pest image classification method and pest image classification system based on morphological multi-feature fusion


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
SARA PINO-POVEDANO et al.: "Radial Basis Function Interpolation for Signal-Model-Independent Localization", IEEE Sensors Journal *
DAI Ting: "Research on Intelligent Detection and Classification Methods for Grain Pests", China Master's Theses Full-text Database, Information Science and Technology *
XU Liang et al.: "An Optimized Image Edge Detection Algorithm Based on the Canny Operator", Bulletin of Science and Technology *
YANG Wenhan: "Research on a Cotton Pest Identification System Based on Digital Image Processing", China Master's Theses Full-text Database, Information Science and Technology *
CHEN Aijun et al.: "Digital Image Processing and Its MATLAB Implementation", 31 July 2008, Northeast Forestry University Press *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188641A (en) * 2019-05-20 2019-08-30 北京迈格威科技有限公司 Image recognition and the training method of neural network model, device and system
CN110188641B (en) * 2019-05-20 2022-02-01 北京迈格威科技有限公司 Image recognition and neural network model training method, device and system
CN110490861A (en) * 2019-08-22 2019-11-22 石河子大学 Method and system for recognizing aphids on yellow sticky boards
CN113761970A (en) * 2020-06-02 2021-12-07 苏州科瓴精密机械科技有限公司 Method, system, robot and storage medium for identifying working position based on image
CN113761970B (en) * 2020-06-02 2023-12-26 苏州科瓴精密机械科技有限公司 Method, system, robot and storage medium for identifying working position based on image
CN112528726B (en) * 2020-10-14 2022-05-13 石河子大学 Cotton aphid pest monitoring method and system based on spectral imaging and deep learning
CN112528726A (en) * 2020-10-14 2021-03-19 石河子大学 Aphis gossypii insect pest monitoring method and system based on spectral imaging and deep learning
CN112926674A (en) * 2021-03-19 2021-06-08 广东好太太智能家居有限公司 Image classification prediction method and device based on support vector machine model
CN112907651A (en) * 2021-03-29 2021-06-04 山东捷瑞数字科技股份有限公司 Measuring method of oyster external form based on semantic segmentation network
CN116135019A (en) * 2021-11-17 2023-05-19 中国联合网络通信集团有限公司 Pest control method, server, pest control device, and storage medium
CN114387185A (en) * 2022-01-12 2022-04-22 苏州海关综合技术中心 Fruit fly trapping analysis system and identification method
CN114998614A (en) * 2022-08-08 2022-09-02 浪潮电子信息产业股份有限公司 Image processing method, device and equipment and readable storage medium
CN114998614B (en) * 2022-08-08 2023-01-24 浪潮电子信息产业股份有限公司 Image processing method, device and equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN109102004A (en) Cotton pest identification and classification method and device
Li et al. SAR image change detection using PCANet guided by saliency detection
Papari et al. Edge and line oriented contour detection: State of the art
US9971929B2 (en) Fingerprint classification system and method using regular expression machines
CN109978848B (en) Method for detecting hard exudation in fundus image based on multi-light-source color constancy model
Deenan et al. Image segmentation algorithms for Banana leaf disease diagnosis
CN113436212A (en) Extraction method for inner contour of circuit breaker static contact meshing state image detection
CN112579823B (en) Video abstract generation method and system based on feature fusion and incremental sliding window
CN106127735B (en) A kind of facilities vegetable edge clear class blade face scab dividing method and device
CN115731257A (en) Leaf form information extraction method based on image
CN109063716B (en) Image identification method, device, equipment and computer readable storage medium
CN109359653B (en) Cotton leaf adhesion lesion image segmentation method and system
CN116071339A (en) Product defect identification method based on improved whale algorithm optimization SVM
Fu et al. Genetic programming for edge detection: a Gaussian-based approach
Lyasheva et al. Application of image weight models to increase canny contour detector resilience to interference
CN112200789B (en) Image recognition method and device, electronic equipment and storage medium
CN109376782A (en) Support vector machines cataract stage division and device based on eye image feature
Taheri et al. Diagnosis of cardiovascular disease using fuzzy methods in nuclear medicine imaging
CN110197114B (en) Automatic identification method and device for single neuron axon synaptic junction in whole brain range
Khan et al. Segmentation of single and overlapping leaves by extracting appropriate contours
Narasimhamurthy et al. A Copy-Move Image Forgery Detection Using Modified SURF Features and AKAZE Detector.
Karma et al. Image segmentation based on color dissimilarity
CN115880696A (en) Internet of things card management method and device based on deep learning and related media
CN116309633A (en) Retina blood vessel segmentation method based on nuclear intuitionistic fuzzy C-means clustering
CN114677530A (en) Clustering algorithm effectiveness evaluation method, device and medium based on wavelet shape descriptor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Qu Haiping; Zhang Ying
Inventor before: Qu Haiping; Zhang Ying; Yue Jun; Li Zhenbo; Kou Guangjie; Zhang Zhiwang
RJ01 Rejection of invention patent application after publication
Application publication date: 20181228