CN117152609A - Crop appearance characteristic detecting system - Google Patents
- Publication number
- CN117152609A (application CN202311092860.8A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/188 — Vegetation (terrestrial scenes)
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/048 — Activation functions
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06V10/10 — Image acquisition
- G06V10/30 — Noise filtering (image preprocessing)
- G06V10/454 — Integrating filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06V10/56 — Extraction of image or video features relating to colour
- G06V10/764 — Recognition using classification, e.g. of video objects
- G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/82 — Recognition using neural networks
Abstract
The invention collects appearance images of crops in real time through a camera array and, after preprocessing the images, automatically extracts features of the plant images with a deep learning method, for example the convolution layers, pooling layers and activation functions of a convolutional neural network (CNN). Once the features of a plant image have been extracted, the health condition of the plants in the image can be classified and identified with a machine learning algorithm or a deep learning model; deep learning models commonly used in this scenario include ResNet, Inception and EfficientNet. By retraining such a model on a large number of plant images, automatic detection and diagnosis of plant diseases can be achieved. With this information, the invention can provide a personalized decision scheme through an agricultural intelligent-decision Web service based on a support vector machine, according to the current condition of the crops, helping farmers to quickly identify problematic crops during planting and to deal with them promptly according to the decision scheme provided by the system.
Description
Technical Field
The invention relates to a crop appearance feature detection system and belongs to the technical field of agricultural artificial-intelligence applications.
Background
The appearance of a crop usually reflects its health. To cultivate higher-quality crops, the growth state of the crops during planting must be closely monitored, for example leaf colour and form, the sturdiness of branches and stems, and the condition of flowers and fruits. By identifying the appearance features of plants, health problems can therefore be found as early as possible and targeted protective and remedial measures can be taken.
At present, farmers can accurately learn about soil characteristics, plant diseases, insect pests and the like through remote sensing technology and sensors deployed in the field, which improves production efficiency and resource utilization to a certain extent. However, installing sensors on or around crops is costly and can interfere with crop growth.
Disclosure of Invention
To solve the above technical problems, the invention provides a crop appearance feature detection system that requires no sensors to be installed on the crops, has little influence on crop growth, and costs less.
The invention is realized by the following technical scheme.
The invention provides a crop appearance feature detection system comprising an image processing and feature extraction module, a growth condition feedback module and a cultivation scheme recommendation module. The image processing and feature extraction module comprises an image processing unit and a feature extraction unit; the image processing unit denoises the acquired crop images and adjusts their size, and the feature extraction unit acquires key features from the crop images by training a deep learning model and using a green vegetation extraction algorithm;
the growth condition feedback module comprises a growth condition model training unit, a growth condition judging unit and a nutrition deficiency component feedback unit; the growth condition model training unit trains, based on a machine learning algorithm and using the analysed labelled data, a model capable of identifying growth conditions; the growth condition judging unit judges the growth condition of the crops by inputting the collected crop image data and feature data into the trained model; the nutrition deficiency component feedback unit accurately judges which nutrients the crops lack by inputting the crop image data and feature data into a support vector machine model;
the cultivation scheme recommendation module comprises a cultivation scheme generation unit, which generates a corresponding cultivation scheme, using an agricultural intelligent-decision Web service based on a support vector machine, according to the growth condition and the nutrient deficiency condition; the cultivation scheme covers soil trace-element adjustment, illumination adjustment and irrigation-frequency adjustment.
The system also comprises a data collection module, a data analysis module and a user feedback and service module, wherein:
the data collection module comprises an image collection unit and a database unit, wherein the image collection unit is used for collecting image data of plants; the database unit is used for storing the collected image data;
the data analysis module comprises a data analysis unit which analyzes the collected image data and characteristic data;
the user feedback and service module comprises a user photo-upload unit, a user feedback unit and a GPT unit. Through the photo-upload unit, the user uploads crop image information; the system automatically analyses the image, gives the user feedback on the crop's growth condition and nutrient status, and provides a cultivation scheme. The user feedback unit is a platform through which users exchange opinions with the developers; it collects feedback after use so that the system can be improved and updated. The GPT unit uses natural language processing technology to provide users with personalized crop-related knowledge and services.
After preprocessing the image, the features of the plant image are extracted using a deep learning method.
The deep learning method comprises a convolution layer, a pooling layer and an activation function in the convolutional neural network CNN.
Machine learning algorithms or deep learning models are used to classify and identify the health of plants in the images.
The deep learning model is one of ResNet, inception, efficientNet.
The green vegetation extraction algorithm is one of Excess Green (ExG), CIVE, AP-HI and the multi-threshold image segmentation method.
And collecting the appearance image of the crops in real time through a camera array.
The beneficial effects of the invention are as follows: by analysing the image data, the health condition of crops can be understood more comprehensively, the cultivation scheme currently most suitable for the crops can be given accordingly, and crop quality is improved; the system's functions can be extended on the basis of the image information, giving it good extensibility.
Drawings
FIG. 1 is a block diagram of a system of the present invention;
FIG. 2 is a flow chart of training the ResNet model in the present invention.
Detailed Description
The technical solution of the present invention is further described below, but the scope of protection claimed is not limited thereto.
Example 1
As shown in FIG. 1, the data collection module collects and stores data; the image processing and feature extraction module processes images according to a unified standard and extracts the feature data they contain; the data analysis module analyses and processes the extracted data; the growth condition feedback module feeds back the current health condition of the crops; the cultivation scheme recommendation module gives a personalized cultivation scheme according to the current condition of the crops; and the user feedback and service module collects user feedback and provides crop-cultivation services.
Specifically, the data collection module is used for collecting image data of plants and related characteristic condition data.
Specifically, the image processing and feature extraction module denoises the collected images, adjusts their size and so on, ensuring consistent image quality. It can also extract key features of the plant images, such as leaf veins, shapes and colours, using a deep learning model and a green vegetation extraction algorithm.
Specifically, the data analysis module is used for analyzing the collected image data and the characteristic data.
Specifically, the growth condition feedback module judges the growth condition and nutrient deficiency condition of the plants. Its core unit is the nutrition deficiency component feedback unit, which judges the plants' deficiencies using a support-vector-machine-based crop deficiency symptom diagnosis method, with high accuracy.
Specifically, the cultivation scheme recommendation module recommends a cultivation scheme to be implemented at present according to the growth condition of plants.
Specifically, the user feedback and service module provides a feedback platform for the user, and the system is properly improved according to the feedback condition of the user. And the GPT is integrated in the module so that the user can learn the basic knowledge of crops more conveniently and provide better services.
Example 2
Preferably, the data collection module includes:
an image acquisition unit: the image acquisition unit acquires an image of the plant through the camera array.
Database unit: the database unit is used for storing the collected image data.
Preferably, the image processing and feature extraction module includes:
an image processing unit: the image processing unit performs functions of denoising, adjusting the size and the like on the acquired crop image so as to ensure consistency of image quality.
Feature extraction unit: the feature extraction unit is used for acquiring key features from the image of the crop by training a deep learning model and using a green vegetation extraction algorithm. A CNN architecture widely used in the field of computer vision is used in this unit. Such as ResNet, inception, efficientNet, etc. Green vegetation extraction algorithms that can be used are ExcessGreen (ExG), CIVE, AP-HI algorithm, multi-threshold image segmentation methods, and the like.
The convolution layers, pooling layers and activation functions in these architectures perform feature extraction on the image. The convolution layer uses a convolution kernel to extract local features: the kernel slides over the input image, the local region at each position is multiplied element-wise by the kernel, and the results are summed to obtain a new feature value. This process in effect detects edges, textures and other local features in the input image. The pooling layer reduces the size of the feature map, lowering computational complexity while extracting the most important features; max pooling, a common pooling method, selects the largest feature value within each region as that region's pooling result, so the most significant features are preserved. The activation function introduces nonlinearity into the CNN, allowing the network to learn nonlinear characteristics.
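As an illustrative sketch (not the patent's implementation), the three operations just described — convolution, ReLU activation and max pooling — can be written out for a single-channel image; the 5x5 toy image and the vertical-edge kernel below are hypothetical:

```python
def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation) of a single-channel image."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(img[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def relu(fmap):
    """Activation: set all negative feature values to zero."""
    return [[max(0, v) for v in row] for row in fmap]

def max_pool2(fmap):
    """2x2 max pooling: halve the map size, keep the strongest response per region."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A hypothetical 5x5 image with a dark-to-bright vertical edge,
# and a kernel that responds to exactly that edge.
img = [[0, 0, 1, 1, 1]] * 5
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]
features = relu(conv2d(img, edge_kernel))  # strongest where the edge sits
pooled = max_pool2(features)               # reduced map keeping the peak response
```

The kernel fires at the edge columns and nowhere else, which is the "edge detection" behaviour the description attributes to convolution layers.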
Green vegetation extraction analyses colour features in an image, typically operating in different colour spaces (e.g. RGB, HSV) to identify and separate the plant parts of the image. The most commonly used of these is the RGB colour space. In this process, based on the proportion of green pixels in the image, the frequency of colour occurrence in different regions is computed by histogram statistics, so that green vegetation regions can be extracted effectively.
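A minimal sketch of RGB-based green-pixel masking, using the Excess Green index (ExG = 2G − R − B), one of the algorithms the patent names; the threshold value and the sample pixels are hypothetical illustrations:

```python
def green_mask(pixels, threshold=20):
    """Mark pixels whose Excess Green index (2G - R - B) exceeds a threshold.

    `pixels` is a list of (R, G, B) tuples; the threshold here is a
    hypothetical value for illustration, not one taken from the patent.
    """
    return [2 * g - r - b > threshold for (r, g, b) in pixels]

# Hypothetical pixels: a healthy leaf, brown soil, and a near-white highlight.
samples = [(40, 180, 50),    # green leaf -> ExG = 270
           (120, 100, 80),   # soil       -> ExG = 0
           (250, 252, 248)]  # highlight  -> ExG = 6
mask = green_mask(samples)
```

In practice the mask would feed the histogram statistics described above, counting green-pixel frequency per image region.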
Preferably, the data analysis module includes:
a data analysis unit: the data analysis unit is responsible for analyzing the collected image data and the feature data.
Preferably, the growth condition feedback module includes:
growth condition model training unit: the growth condition model training unit is based on a machine learning algorithm and uses the data with labels after analysis to train a model capable of identifying growth conditions.
Growth condition judging unit: the growth condition judging unit judges the growth condition of the crops by inputting the collected image data and characteristic data of the crops based on the trained model.
Nutrition deficiency component feedback unit: in the nutrition missing component feedback unit, a support vector machine model is used for accurately judging which nutrition components are missing by inputting image data and characteristic data of crops. The input vector of the support vector machine model is the green vegetation characteristics extracted by the characteristic extraction unit in the data collection module, and the characteristics are obtained by a color characteristic extraction algorithm. By taking these features as inputs, the support vector machine can achieve accurate diagnosis of crop pixel deficiency symptoms in a high-dimensional feature space. This process can be seen as a nonlinear mapping of the input vector of the least squares support vector machine to a high dimensional space to solve the convex optimization problem. The specific expression is as follows:
$$\min_{\omega,b,\varepsilon}\; J(\omega,\varepsilon)=\frac{1}{2}\lVert\omega\rVert^{2}+\frac{c}{2}\sum_{i=1}^{N}\varepsilon_i^{2},\qquad \text{s.t. } y_i=\omega^{T}\varphi(x_i)+b+\varepsilon_i,
$$

wherein ω is the weight vector, c is a penalty factor, ε_i is the prediction error, x_i is the input, y_i is the output, and b is a constant. Introducing Lagrange multipliers α_j and selecting a radial basis function kernel K(x, x_j), the decision function of the least-squares support vector machine can be determined as:

$$y(x)=\sum_{j=1}^{N}\alpha_j\,K(x,x_j)+b,\qquad K(x,x_j)=\exp\!\left(-\frac{\lVert x-x_j\rVert^{2}}{2\sigma^{2}}\right),
$$

wherein x_j is the centre of the Gaussian kernel and σ is the width of the Gaussian kernel.
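A sketch of evaluating such a decision function with a Gaussian (RBF) kernel. The support-vector centres, multipliers, bias, kernel width and the deficiency labelling below are all hypothetical stand-ins for a trained model, not values from the patent:

```python
import math

def rbf_kernel(x, xj, sigma):
    """Gaussian (radial basis function) kernel K(x, x_j) = exp(-||x - x_j||^2 / (2*sigma^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, xj))
    return math.exp(-sq / (2 * sigma ** 2))

def lssvm_decision(x, centers, alphas, b, sigma):
    """Least-squares SVM decision value: sum_j alpha_j * K(x, x_j) + b."""
    return sum(a * rbf_kernel(x, c, sigma) for a, c in zip(alphas, centers)) + b

# Hypothetical trained parameters: two kernel centres in a 2-D
# colour-feature space, their Lagrange multipliers, bias and width.
centers = [(0.2, 0.8), (0.7, 0.3)]
alphas = [1.0, -1.0]
b, sigma = 0.0, 0.5

score = lssvm_decision((0.25, 0.75), centers, alphas, b, sigma)
label = 1 if score > 0 else -1  # e.g. 1 = deficiency symptoms, -1 = healthy
```

The query point lies near the first centre, so that centre's kernel response dominates and the sign of the score gives the diagnosis.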
Preferably, the cultivation scheme recommendation module includes:
a culture scheme generation unit: the cultivation scheme generating unit generates a corresponding cultivation scheme by using an agricultural intelligent decision Web service based on a support vector machine according to the growth condition and the nutrition deficiency condition, wherein the cultivation scheme comprises soil trace element adjustment, illumination adjustment, irrigation frequency adjustment and the like.
In the intelligent-decision Web service, LIBSVM is used as a general-purpose support vector machine software package, and the support-vector-machine-based intelligent-decision Web service is implemented on the J2EE platform. An intelligent decision system implemented with a support vector machine of the radial basis function (RBF) type is provided to the user in the form of a Web service; with the RBF kernel $K(x,x_j)=\exp(-\lVert x-x_j\rVert^{2}/(2\sigma^{2}))$, the resulting support vector machine is a radial basis function classifier. The user obtains personalized decision results simply by sending the current request to the remote service. At the same time, providing the system as a Web service makes it more open and easier to maintain and update.
Preferably, the user feedback and service module includes:
the user uploads a photo unit: the user uploading photo unit is that the user uploads the crop image information by himself, the system automatically analyzes the image and gives the user feedback on the growth condition and the nutrition composition of the crop, and gives a cultivation scheme.
User feedback unit: the user feedback unit is a platform for users to communicate with developers' opinion. Feedback after the user uses can be collected, and improvement and update of the system in the future are facilitated.
GPT unit: the GPT unit provides personalized knowledge and services related to crops for users by using natural language processing technology. The system can understand the problems of users, provide detailed information and suggestions for the users in aspects of crop growth, disease control, planting skills and the like, generate personalized crop management suggestions based on user data, and realize data-driven decision support. In addition, the GPT unit also actively collects user feedback to continuously improve services, so that the GPT unit meets user requirements, and provides better agricultural support for farmers and crop growers.
Example 3
Corn is taken as the example crop.
The data collection module comprises an image collection unit and a database unit. The image collection unit is responsible for data collection and preparation, using a camera array to collect a large amount of corn image data; at the same time it acquires related feature data such as the shape, colour and surface gloss of the corn and records these feature data. The database unit stores the collected data in a database.
The image processing and feature extraction module comprises an image processing unit and a feature extraction unit. The image processing unit preprocesses the collected corn images according to a unified standard, including denoising, cropping and resizing, to improve image consistency.
The feature extraction unit extracts the features in the image; it adopts a ResNet model, and the green vegetation extraction algorithm used is the multi-threshold image segmentation method.
Further, the structure of the ResNet model used is shown in FIG. 2. The corn images collected by the data collection module, 224x224 in size with 3 channels, are taken as inputs to the ResNet model. Each image first passes through a convolution layer using a 7x7 convolution kernel with stride 2 and padding of 3 pixels.
Specifically, this convolution converts the image into a feature map with more channels. To accelerate training and improve the model's robustness, the feature map is normalized by a batch-normalization layer; the ReLU activation function is then applied to the feature map, setting all negative values to zero and increasing the network's nonlinear expressive capacity. A max-pooling layer then halves the spatial size of the feature map, reducing the amount of data to process.
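The sizes implied by this first stage can be checked with the standard convolution output-size formula; the 3x3/stride-2/padding-1 max-pool parameters below are the usual ResNet choice, assumed here since the description does not state them:

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a convolution or pooling layer:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

s = conv_out(224, kernel=7, stride=2, pad=3)  # 7x7 conv, stride 2, padding 3
print(s)  # 112: the 224x224 map is halved
s = conv_out(s, kernel=3, stride=2, pad=1)    # assumed 3x3 max pool, stride 2
print(s)  # 56: halved again, as the description states
```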
Further, the residual block is the basic unit of ResNet. The feature map first passes through a convolution layer with a 3x3 kernel and padding of 1 pixel to obtain a new feature map, which is then processed by a batch-normalization layer and a ReLU activation function; it then passes through a second convolution layer, another batch-normalization layer and a ReLU activation function; finally, the shortcut connection (i.e. the input feature map) is added to the convolved feature map, and ReLU is applied once more.
Further, after the last residual block, the feature map is reduced in dimension by a global average pooling layer; finally, a fully connected layer maps the reduced feature map to the final feature vector.
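The essential point of the residual block — adding the input back to the transformed features before the final activation — can be sketched as follows. The two "layers" here are simple element-wise stand-ins for conv + batch-norm (with 3x3 kernels and padding 1 the spatial size is preserved, so the addition is well defined); they are illustrative, not the patent's layers:

```python
def relu(xs):
    """Element-wise ReLU on a feature vector."""
    return [max(0.0, x) for x in xs]

def residual_block(x, layer1, layer2):
    """A basic residual block: layer1/layer2 stand in for conv(3x3, pad 1) + BN."""
    out = relu(layer1(x))                  # conv -> BN -> ReLU
    out = layer2(out)                      # conv -> BN
    out = [o + s for o, s in zip(out, x)]  # shortcut: add the input back
    return relu(out)                       # final ReLU

# Hypothetical layer stand-ins: element-wise transforms of the feature vector.
f1 = lambda xs: [0.5 * x for x in xs]
f2 = lambda xs: [x - 1.0 for x in xs]

y = residual_block([2.0, 4.0], f1, f2)
```

Because the shortcut carries the input through unchanged, the block only has to learn a residual correction, which is what makes deep ResNets trainable.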
Based on the color characteristics of corn in different growth stages, corn from the seedling emergence stage to the early maturity stage is first segmented using the following rule:
G-B>T1 && G-R>T2
wherein G, B and R are the green, blue and red channels of the image, and T1 and T2 are the two segmentation thresholds. Based on statistical analysis and experiments on corn images, T1=T2=6 is used here.
In this first segmentation step, segmentation deficiencies occur in two cases in particular. First, in intense sunlight, some parts of the corn leaf appear bright due to reflection, showing up as light white in the image; many segmentation algorithms have difficulty segmenting the corn leaf effectively in this case. To address this deficiency, the segmentation can be further refined by the following rule:
G>T3 && B>T4
Through experiments, the two segmentation thresholds T3 and T4 were set to 250 and 200, respectively. This refinement works well in recovering the highlighted white portions that were missed. However, it may also introduce noise from white objects (e.g., white paper). To solve this problem, the noise characteristics were analyzed and the following rule was introduced to filter out the noise admitted by the preceding rule:
G>210 && G-R<5 && G-B<5 && abs(R-B)<4
Second, in darker weather conditions, such as overcast and rainy days, the color of the corn leaf may appear light green or grayish green, which is another reason many algorithms fail to complete the segmentation. To address this deficiency, the missed corn leaf pixels were analyzed statistically and segmented using the following rule:
G-R>T9 && abs(G-B)<T10
wherein T9 and T10 are the two segmentation thresholds, set experimentally to T9=15 and T10=5.
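Taken together, the primary rule, the highlight refinement, the white-noise filter and the low-light rule combine per pixel roughly as follows (a sketch; the comparison directions of the reconstructed highlight and low-light rules are our reading of the text):

```python
def is_vegetation(r, g, b):
    """Multi-threshold rule set with T1=T2=6, T3=250, T4=200, T9=15, T10=5."""
    primary     = (g - b > 6) and (g - r > 6)
    highlight   = (g > 250) and (b > 200)            # sun-lit whitish leaf parts
    white_noise = (g > 210 and g - r < 5 and
                   g - b < 5 and abs(r - b) < 4)     # e.g. white paper
    low_light   = (g - r > 15) and (abs(g - b) < 5)  # gray-green in dim light

    if white_noise:            # filter noise admitted by the highlight rule
        return False
    return primary or highlight or low_light
```

Applied pixel by pixel, this yields the binary vegetation mask used downstream.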
Compared with the traditional ExG, CIVE and HI methods, the multi-threshold image segmentation method improves the segmentation results on all three types of images. The improvement lies in that, when coping with different illumination conditions, especially large changes in illumination, the thresholds of the multi-threshold method are adjusted in line with the changes in pixel values, so as to adapt to the illumination-induced changes in the color component values of the image. Under conditions of significant illumination change, the multi-threshold image segmentation method therefore provides more accurate segmentation results than traditional color feature extraction methods.
The preprocessed corn image is fed into the network by forward propagation, and through the learning of multiple convolution layers the network automatically extracts local features of the corn image such as shape and color. The feature extraction unit then fuses the feature maps of different convolution layers, realized through concatenation, addition or an attention mechanism, which improves the model's abstraction and expression capacity for the corn image; the fused features are input into a fully connected layer for classification and recognition of the corn image.
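The fusion modes named above (concatenation and addition; an attention mechanism would additionally weight the inputs before combining) can be illustrated on plain feature vectors (a toy sketch, not the network code):

```python
def fuse_features(f1, f2, mode="concat"):
    """Fuse two equally long feature vectors by concatenation or addition."""
    if mode == "concat":
        return f1 + f2                        # lengths add up
    if mode == "add":
        return [a + b for a, b in zip(f1, f2)]  # length preserved
    raise ValueError(f"unknown fusion mode: {mode}")

fuse_features([1, 2], [3, 4])          # [1, 2, 3, 4]
fuse_features([1, 2], [3, 4], "add")   # [4, 6]
```

Concatenation preserves all information at the cost of a wider feature vector; addition keeps the dimension fixed but requires matching shapes.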
In the optimization and training stage, the network parameters are updated through the back propagation algorithm so that the model better extracts the key features of the corn image and classifies it accurately. Finally, the trained model is evaluated on a validation set, computing indexes such as accuracy, precision and recall to assess its performance and ensure that the extracted corn image features generalize well. Throughout this process, the image processing unit and the feature extraction unit cooperate to automatically learn and extract the key features of the corn image.
The data analysis module comprises a data analysis unit responsible for data analysis and model training. By analyzing the image data and feature data, the features relevant to corn growth condition and nutrition deficiency are identified; the analyzed data are then used to train a model for analyzing corn growth and nutrient deficiency. The basic working idea of the model is as follows: the growth condition of the corn is judged from information such as the shape, color and size in the corn image, and the missing nutritional ingredients are judged from the color and size data of the corn. The model is trained here using the friet Dataset. The data are divided into a training set, a validation set and a test set for training and optimizing the model; through iterative training and adjustment of the model parameters, the accuracy and generalization capability of the model are improved.
The growth condition feedback module centers on a growth condition judging unit and a nutrition deficiency component feedback unit. The growth condition judging unit judges the growth condition of the corn, such as normal growth or restricted growth, from the corn images and feature data collected with the camera array.
The nutrition deficiency component feedback unit is the core unit of the growth condition feedback module. Under natural conditions, a total of 40 corn leaf surface images were collected, of which 20 serve as training samples and the other 20 as diagnosis samples. These samples cover four different corn nutrient statuses: normal status, nitrogen deficiency, potassium deficiency and phosphorus deficiency.
In order to establish an accurate diagnosis model for corn nutrient-deficiency symptoms, a binary-coded genetic algorithm is adopted to determine the optimal parameter combination of the least squares support vector machine. Specifically, a population of 100 individuals is initialized and the genetic algorithm is run for 100 generations, with a crossover probability of 0.8 and a mutation probability of 0.2. The parameters c and σ are limited to the range [2^-5, 2^10] and each encoded with 10 bits of binary code. The goal of the genetic algorithm is to find the parameter combination that minimizes the error on the corn nutrient-deficiency symptom images. Six color characteristic factors are extracted from the 20 actual corn nutrient-deficiency symptom images to train the least squares support vector machine model. Using this model, the other 20 corn images with unknown symptoms were diagnosed; the results show that the error between the calculated output value and the desired output value is very small. The method is therefore a fast and effective classification and diagnosis model for corn nutrition symptoms and can be used to analyze the remaining corn image data to obtain the corn nutrient deficiency condition.
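The 10-bit encoding of c and σ over [2^-5, 2^10] might be decoded as follows (a sketch; log-uniform decoding over the exponent is our assumption, since the text only fixes the range and the bit width):

```python
def decode_param(bits, low_exp=-5, high_exp=10):
    """Map a 10-bit chromosome segment to a value in [2^-5, 2^10]."""
    value = int(bits, 2)                                # 0 .. 1023
    exp = low_exp + (high_exp - low_exp) * value / 1023
    return 2.0 ** exp

decode_param("0000000000")   # 2^-5  = 0.03125
decode_param("1111111111")   # 2^10  = 1024.0
```

A chromosome for the two parameters would then be 20 bits, decoded in two 10-bit halves, and the genetic operators (crossover, mutation) act directly on the bit string.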
The cultivation scheme recommendation module comprises a cultivation scheme generation unit, which generates an optimal cultivation scheme using an agricultural intelligent decision Web service based on a support vector machine, taking the growth condition judgment, the nutrition deficiency detection result and the environmental conditions as input. The scheme includes soil trace element adjustment advice, fertilization advice, irrigation frequency and intensity, etc. The recommendation strategy may be based on past corn data and expert knowledge.
LIBSVM is used as the tool for implementing the support vector machine (SVM) algorithm; it is a general-purpose SVM software package that is easy to use, simple to operate, fast and efficient.
Taking as an example the classification of the different decision schemes required by corn in different growth conditions, the support vector machine algorithm uses radial basis functions as kernel functions, and the number and centers of the radial basis functions are determined automatically by the algorithm.
The general procedure for constructing an RBF-type support vector machine model based on LIBSVM is as follows:
1) Prepare the data in the format required by the LIBSVM software package; each line of training or test data takes the form <label> <index1>:<value1> <index2>:<value2> ...;
2) Carry out the necessary preprocessing on the data;
3) Select the radial basis function as the kernel function;
4) Select the optimal parameters by cross-validation;
5) Train on the whole training set with the optimal parameters to obtain the support vector machine model.
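The sparse text format required in the first step can be produced with a small helper (a sketch; `to_libsvm_line` is a hypothetical name, and zero-valued features, which LIBSVM allows to be omitted, are written out here for simplicity):

```python
def to_libsvm_line(label, features):
    """Render one sample as '<label> <index1>:<value1> <index2>:<value2> ...'
    with 1-based feature indices, as the LIBSVM text format requires."""
    pairs = " ".join(f"{i}:{v}" for i, v in enumerate(features, start=1))
    return f"{label} {pairs}"

to_libsvm_line(1, [0.2, 0.0, 0.7])   # "1 1:0.2 2:0.0 3:0.7"
```

One such line per sample, written to a plain text file, is what the LIBSVM training and prediction tools consume.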
After the support vector machine model is obtained, the prediction function in LIBSVM can be used to classify or predict the corn growth condition.
Further, the RBF (radial basis function) type support vector machine model constructed with LIBSVM is packaged to provide an intelligent decision Web service; in this way, the user can make personalized decisions with this model by accessing the Web service.
Preferably, the user feedback and service module comprises a user photo upload unit, a user feedback unit and a GPT unit, and provides a user-friendly interface. In the photo upload unit, the user is allowed to upload corn images outside the shooting times of the camera array and obtains real-time feedback and a recommended cultivation scheme; the user interface should be clear and provide guidance and explanation for user operations. The user feedback unit collects user feedback and evaluations, through user surveys and feedback forms, to improve the accuracy and user experience of the system.
Specifically, in the GPT unit, a trained GPT model needs to be integrated into the module. First, large-scale text data related to corn planting and agriculture is collected, including information such as growth conditions, pest control and fertilization techniques. Then, a GPT model architecture suitable for natural language processing tasks is selected, and a pre-trained GPT model, already trained for generality on large-scale text data, is loaded. Subsequently, the task and the related label data are defined, with user questions as input and the related corn knowledge as output.
Further, a labeled dataset is collected in which an expert provides the correct output label for each input. The model is then adapted to the specific agricultural domain by fine-tuning, enabling it to understand the language and knowledge associated with corn planting; during this process the parameters and hyper-parameters of the model are adjusted to optimize performance.
Further, the data is divided into small batches, and the model weights are updated using back propagation and gradient descent to minimize the gap between the model predictions and the labels. After fine-tuning is completed, the performance of the model is evaluated on a separate validation set. Finally, the best performing model is selected and used in practice to answer user queries about corn, and is continually monitored and refined to ensure that it provides accurate and useful information in a changing agricultural environment. This process requires substantial data, computational resources and domain expertise to ensure the validity and reliability of the model.
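The minimize-the-gap idea behind the fine-tuning step can be illustrated with a one-parameter toy model (a stand-in for gradient descent on a squared-error loss, not an actual GPT weight update):

```python
def sgd_step(w, x, y, lr=0.1):
    """One gradient-descent update for y_hat = w * x under squared error."""
    grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
    return w - lr * grad

w = 0.0
for _ in range(50):
    w = sgd_step(w, x=1.0, y=3.0)
# w has converged close to 3.0, the value that zeroes the prediction gap
```

Real fine-tuning applies the same update rule to millions of parameters at once, with the gradient computed by back propagation over mini-batches.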
In particular, once the GPT model is integrated into the user feedback module, basic knowledge about corn and better services can be provided. The user may pose questions or requests about corn on the designed interface, such as "How is corn best cultivated?" or "How do I detect diseases on corn leaves?". Once the user has posed a question, the GPT model processes the input and generates information related to corn. For example, if a user asks how to cultivate corn, the GPT model may generate text explaining the growth cycle of corn, optimal planting conditions (such as soil type, temperature and humidity requirements), fertilization recommendations and pest control methods; such information may include how to detect and treat diseases on corn leaves, thereby providing comprehensive guidance. Furthermore, the system may generate personalized corn planting suggestions based on feedback information provided by the user, such as geographic location, soil quality and planting history, to help optimize crop production; for example, if a user reports a particular problem on the corn leaves, the system may provide specific disease identification and control recommendations. To protect the data privacy of the user, data encryption and anonymization measures are taken and the relevant privacy regulations are observed. The system also continuously monitors user feedback and uses it to improve the accuracy and responsiveness of the GPT model; if a user is not satisfied with the answer to a question, the system records the query to refine the model's training data.
In summary, the camera array is used to collect images of the crops: no sensors need to be arranged on the crops, the number of cameras in the array is small, the cameras are far from the crops, and the influence on crop growth is therefore small. Through analysis of the images, the current state of the crops can be understood more comprehensively, facilitating accurate adjustment of nutrition and promoting high-quality growth of the crops. As technology progresses, the shooting range of a camera increases and fewer cameras are needed, while image analysis and processing technologies mature, lowering the cost of using the system. The system has good extensibility and can be expanded at any time if further image-processing-based functions are required. The GPT unit in the user feedback and service module provides convenient, personalized crop knowledge and services, helping farmers solve problems and improve production efficiency through natural language understanding technology, while continuously collecting user feedback for improvement; it is an important intelligent support tool in the agricultural field and promotes the sustainable development of modern agriculture.
Claims (8)
1. A crop appearance characteristic detecting system, comprising an image processing and feature extraction module, a growth condition feedback module and a cultivation scheme recommendation module, characterized in that:
the image processing and feature extraction module comprises an image processing unit and a feature extraction unit, and the image processing unit is used for denoising the obtained crop image and adjusting the size; the feature extraction unit acquires key features from the image of the crop by training a deep learning model and using a green vegetation extraction algorithm;
the growth condition feedback module comprises a growth condition model training unit, a growth condition judging unit and a nutrition deficiency component feedback unit, wherein the growth condition model training unit is used for training a model capable of identifying the growth condition by using data with labels after analysis based on a machine learning algorithm; the growth condition judging unit judges the growth condition of crops by inputting the collected crop image data and the characteristic data based on the trained model; the nutrition missing component feedback unit accurately judges which nutrition components the crops lack by inputting the image data and the characteristic data of the crops based on the support vector machine model;
the cultivation scheme recommending module comprises a cultivation scheme generating unit, and the cultivation scheme generating unit generates a corresponding cultivation scheme by using an agricultural intelligent decision Web service based on a support vector machine according to the growth condition and the nutrition deficiency condition, wherein the cultivation scheme comprises soil trace element adjustment, illumination adjustment and irrigation frequency adjustment.
2. The crop profile feature detection system of claim 1, wherein: the system also comprises a data collection module, a data analysis module and a user feedback and service module, wherein:
the data collection module comprises an image collection unit and a database unit, wherein the image collection unit is used for collecting image data of plants; the database unit is used for storing the collected image data;
the data analysis module comprises a data analysis unit which analyzes the collected image data and characteristic data;
the user feedback and service module comprises a user photo upload unit, a user feedback unit and a GPT unit, wherein the user photo upload unit allows the user to upload crop image information, and the system automatically analyzes the image, gives the user feedback on the growth condition and nutritional ingredients of the crop, and gives a cultivation scheme; the user feedback unit is a platform for users to exchange opinions with the developers, used to collect feedback after use and to improve and update the system; the GPT unit provides users with personalized crop-related knowledge and services using natural language processing technology.
3. The crop profile feature detection system of claim 1, wherein: after preprocessing the image, the features of the plant image are extracted using a deep learning method.
4. The crop profile feature detection system of claim 3, wherein: the deep learning method comprises a convolution layer, a pooling layer and an activation function in the convolutional neural network CNN.
5. The crop profile feature detection system of claim 1, wherein: machine learning algorithms or deep learning models are used to classify and identify the health of plants in the images.
6. The crop profile feature detection system of claim 5, wherein: the deep learning model is one of ResNet, inception, efficientNet.
7. The crop profile feature detection system of claim 1, wherein: the green vegetation extraction algorithm is one of an Excessgreen image, a CIVE, an AP-HI and a multi-threshold image segmentation method.
8. The crop profile feature detection system of claim 1, wherein: and collecting the appearance image of the crops in real time through a camera array.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311092860.8A CN117152609A (en) | 2023-08-28 | 2023-08-28 | Crop appearance characteristic detecting system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117152609A true CN117152609A (en) | 2023-12-01 |
Family
ID=88886080
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311092860.8A Pending CN117152609A (en) | 2023-08-28 | 2023-08-28 | Crop appearance characteristic detecting system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117152609A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117707091A (en) * | 2023-12-25 | 2024-03-15 | 盐城中科高通量计算研究院有限公司 | Agricultural straw processing quality control system based on image processing |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104063686A (en) * | 2014-06-17 | 2014-09-24 | 中国科学院合肥物质科学研究院 | System and method for performing interactive diagnosis on crop leaf segment disease images |
CN107222682A (en) * | 2017-07-11 | 2017-09-29 | 西南大学 | Crop growth state testing method and device |
CN110347127A (en) * | 2019-06-26 | 2019-10-18 | 北京农业智能装备技术研究中心 | Crop planting mandatory system and method based on cloud service |
CN113221723A (en) * | 2021-05-08 | 2021-08-06 | 余治梅 | Traceable self-feedback learning urban plant factory |
CN116524279A (en) * | 2023-05-19 | 2023-08-01 | 广西科技师范学院 | Artificial intelligent image recognition crop growth condition analysis method for digital agriculture |
CN116645232A (en) * | 2023-06-10 | 2023-08-25 | 海南玻色科技有限公司 | Intelligent management system for agricultural cultivation |
Non-Patent Citations (1)
Title |
---|
Zhao Chunjiang: "A Review of Agricultural Knowledge Intelligent Service Technology", Smart Agriculture, vol. 50, no. 2, 30 June 2023 (2023-06-30), pages 1 - 17 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Abdullahi et al. | Convolution neural network in precision agriculture for plant image recognition and classification | |
Zhou et al. | Wheat ears counting in field conditions based on multi-feature optimization and TWSVM | |
Mishra et al. | A Deep Learning-Based Novel Approach for Weed Growth Estimation. | |
Ninomiya | High-throughput field crop phenotyping: current status and challenges | |
CN113657158B (en) | Google EARTH ENGINE-based large-scale soybean planting area extraction algorithm | |
CN111553240A (en) | Corn disease condition grading method and system and computer equipment | |
Lamba et al. | Optimized classification model for plant diseases using generative adversarial networks | |
Manohar et al. | Image processing system based identification and classification of leaf disease: A case study on paddy leaf | |
Paymode et al. | Tomato leaf disease detection and classification using convolution neural network | |
CN106846334A (en) | Field corn plant recognition methods based on Support Vector data description | |
Bhuyar et al. | Crop classification with multi-temporal satellite image data | |
Bilal et al. | Increasing crop quality and yield with a machine learning-based crop monitoring system | |
Balasubramaniyan et al. | Color contour texture based peanut classification using deep spread spectral features classification model for assortment identification | |
CN117152609A (en) | Crop appearance characteristic detecting system | |
CN115861686A (en) | Litchi key growth period identification and detection method and system based on edge deep learning | |
Chaudhari et al. | Detection and Classification of Banana Leaf Disease Using Novel Segmentation and Ensemble Machine Learning Approach | |
Valente et al. | Fast classification of large germinated fields via high-resolution UAV imagery | |
Calma et al. | Cassava Disease Detection using MobileNetV3 Algorithm through Augmented Stem and Leaf Images | |
Chauhan et al. | Deep residual neural network for plant seedling image classification | |
Terzi et al. | Automatic detection of grape varieties with the newly proposed CNN model using ampelographic characteristics | |
CN114663652A (en) | Image processing method, image processing apparatus, management system, electronic device, and storage medium | |
Mahilraj et al. | Detection of Tomato leaf diseases using Attention Embedded Hyper-parameter Learning Optimization in CNN | |
Ramasamy et al. | Classification of Nutrient Deficiencies in Plants Using Recurrent Neural Network | |
CN118072251B (en) | Tobacco pest identification method, medium and system | |
CN118172676B (en) | Farmland pest detection method based on quantum deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||