
CN117152609A - Crop appearance characteristic detecting system - Google Patents

Crop appearance characteristic detecting system

Info

Publication number
CN117152609A
CN117152609A (application CN202311092860.8A)
Authority
CN
China
Prior art keywords
image
unit
data
crop
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311092860.8A
Other languages
Chinese (zh)
Inventor
张峰
蒋明
刘章珩
何新磊
卢垚
邹陵
陶星宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Academy Of Mathematical Sciences Technology Co ltd
Original Assignee
Guangxi Academy Of Mathematical Sciences Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Academy Of Mathematical Sciences Technology Co ltd
Priority to CN202311092860.8A
Publication of CN117152609A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V 10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V 10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V 10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention collects crop appearance images in real time through a camera array and, after preprocessing the images, automatically extracts plant image features with a deep learning method, for example the convolution layers, pooling layers and activation functions of a convolutional neural network (CNN). Once the features of the plant image have been extracted, the health condition of the plants in the image can be classified and identified with a machine learning algorithm or a deep learning model; deep learning models commonly used in this scenario include ResNet, Inception and EfficientNet, and by retraining such a model on a large number of plant images, automatic detection and diagnosis of plant diseases can be realized. With this information, the invention can provide a personalized decision scheme through an agricultural intelligent decision Web service based on a support vector machine, according to the current condition of the crops, helping farmers quickly find problematic crops during planting and deal with them promptly according to the decision scheme provided by the system.

Description

Crop appearance characteristic detecting system
Technical Field
The invention relates to a crop appearance characteristic detection system, and belongs to the technical field of agricultural artificial intelligence application.
Background
The appearance of a crop usually reflects its health. To cultivate higher-quality crops, close attention must be paid to the growth state during planting, such as leaf color and form, the sturdiness of branches and stems, and the condition of flowers and fruit. By identifying the appearance characteristics of plants, health problems can therefore be found as early as possible, and targeted protective and remedial measures can be taken.
At present, farmers can accurately learn about soil characteristics, plant diseases, insect pests and the like through remote sensing technology and sensors deployed in the field, which improves production efficiency and resource utilization to a certain extent.
Disclosure of Invention
To solve the above technical problems, the invention provides a crop appearance characteristic detection system that requires no sensors to be installed on the crops, has little influence on crop growth, and costs less.
The invention is realized by the following technical scheme.
The invention provides a crop appearance feature detection system, which comprises an image processing and feature extraction module, a growth condition feedback module and a cultivation scheme recommendation module, wherein the image processing and feature extraction module comprises an image processing unit and a feature extraction unit, and the image processing unit is used for denoising an acquired crop image and adjusting the size; the feature extraction unit acquires key features from the image of the crop by training a deep learning model and using a green vegetation extraction algorithm;
the growth condition feedback module comprises a growth condition model training unit, a growth condition judging unit and a nutrition deficiency component feedback unit, wherein the growth condition model training unit trains, based on a machine learning algorithm and using the analyzed labeled data, a model capable of identifying growth conditions; the growth condition judging unit judges the growth condition of the crops by feeding the collected crop image data and feature data into the trained model; the nutrition deficiency component feedback unit accurately judges which nutrients the crops lack by feeding the crop image data and feature data into a support vector machine model;
the cultivation scheme recommending module comprises a cultivation scheme generating unit, and the cultivation scheme generating unit generates a corresponding cultivation scheme by using an agricultural intelligent decision Web service based on a support vector machine according to the growth condition and the nutrition deficiency condition, wherein the cultivation scheme comprises soil trace element adjustment, illumination adjustment and irrigation frequency adjustment.
The system also comprises a data collection module, a data analysis module and a user feedback and service module, wherein:
the data collection module comprises an image collection unit and a database unit, wherein the image collection unit is used for collecting image data of plants; the database unit is used for storing the collected image data;
the data analysis module comprises a data analysis unit which analyzes the collected image data and characteristic data;
the user feedback and service module comprises a user uploading photo unit, a user feedback unit and a GPT unit. In the user uploading photo unit, the user uploads crop image information; the system automatically analyzes the image, gives the user feedback on the crop's growth condition and nutritional composition, and provides a cultivation scheme. The user feedback unit is a platform for users to exchange opinions with the developers, used to collect post-use feedback and to improve and update the system. The GPT unit uses natural language processing technology to provide users with personalized crop-related knowledge and services.
After preprocessing the image, the features of the plant image are extracted using a deep learning method.
The deep learning method comprises a convolution layer, a pooling layer and an activation function in the convolutional neural network CNN.
Machine learning algorithms or deep learning models are used to classify and identify the health of plants in the images.
The deep learning model is one of ResNet, Inception and EfficientNet.
The green vegetation extraction algorithm is one of ExcessGreen (ExG), CIVE, AP-HI and multi-threshold image segmentation.
The appearance images of the crops are collected in real time through a camera array.
The invention has the following beneficial effects: by analyzing the image data, the health condition of the crops can be understood more comprehensively, and the cultivation scheme currently best suited to the crops can be given accordingly, improving crop quality; the system's functions can be extended on the basis of the image information, giving it good extensibility.
Drawings
FIG. 1 is a block diagram of a system of the present invention;
FIG. 2 is a flow chart of training the ResNet model in the present invention.
Detailed Description
The technical solution of the present invention is further described below, but the scope of protection claimed by the invention is not limited thereto.
Example 1
As shown in FIG. 1, the data collection module collects and stores data; the image processing and feature extraction module processes images according to a unified standard and extracts the feature data in them; the data analysis module analyzes and processes the extracted data; the growth condition feedback module feeds back the current health condition of the crops; the cultivation scheme recommendation module gives a personalized cultivation scheme according to the current condition of the crops; and the user feedback and service module collects user feedback and provides crop-cultivation services.
Specifically, the data collection module is used for collecting image data of plants and related characteristic condition data.
Specifically, the image processing and feature extraction module denoises and resizes the collected images, guaranteeing consistent image quality. Meanwhile, key features of the plant images, such as leaf texture, shape and color, can be extracted using a deep learning model and a green vegetation extraction algorithm.
Specifically, the data analysis module is used for analyzing the collected image data and the characteristic data.
Specifically, the growth condition feedback module is used for judging the growth condition and the nutrition deficiency condition of the plants. The core unit in the module is a nutrition deficiency component feedback unit, and the unit judges the plant deficiency condition by using a crop deficiency symptom diagnosis method based on a support vector machine, so that the accuracy is high.
Specifically, the cultivation scheme recommendation module recommends a cultivation scheme to be implemented at present according to the growth condition of plants.
Specifically, the user feedback and service module provides a feedback platform for users, and the system is improved appropriately according to their feedback. GPT is integrated into the module so that users can learn basic crop knowledge more conveniently and receive better services.
Example 2
Preferably, the data collection module includes:
an image acquisition unit: the image acquisition unit acquires an image of the plant through the camera array.
Database unit: the database unit is used for storing the collected image data.
Preferably, the image processing and feature extraction module includes:
an image processing unit: the image processing unit performs functions of denoising, adjusting the size and the like on the acquired crop image so as to ensure consistency of image quality.
Feature extraction unit: the feature extraction unit acquires key features from the crop image by training a deep learning model and using a green vegetation extraction algorithm. A CNN architecture widely used in computer vision, such as ResNet, Inception or EfficientNet, is used in this unit. Usable green vegetation extraction algorithms include ExcessGreen (ExG), CIVE, the AP-HI algorithm, and multi-threshold image segmentation methods.
The convolution layer, pooling layer and activation function in these architectures perform feature extraction on the image. The convolution layer uses a convolution kernel to extract local features: the kernel slides over the input image, the local area at each position is multiplied elementwise by the kernel, and the results are summed to obtain a new feature value. This process in effect detects edges, textures and other local features in the input image. The pooling layer reduces the size of the feature map, lowering computational complexity while extracting the most important features. Max pooling is a common pooling approach that keeps the largest feature value within each region as that region's pooling result, so the most significant features are preserved. The activation function introduces nonlinearity into the CNN, allowing the network to learn nonlinear characteristics.
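The three operations described above can be sketched in a few lines of NumPy. This is a minimal single-channel illustration for intuition only, not the actual network code; the edge-detecting kernel is an assumed example.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D convolution: slide the kernel and sum elementwise products."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, s=2):
    """s x s max pooling: keep the largest value in each region."""
    h, w = x.shape[0] // s, x.shape[1] // s
    return x[:h * s, :w * s].reshape(h, s, w, s).max(axis=(1, 3))

relu = lambda x: np.maximum(x, 0)  # activation: zero out negative responses

edge_kernel = np.array([[-1, 0, 1]] * 3)  # assumed vertical-edge detector
feat = relu(max_pool(conv2d(np.random.rand(8, 8), edge_kernel)))
```

An 8x8 input with a 3x3 kernel yields a 6x6 response map, which 2x2 max pooling reduces to 3x3; ReLU then keeps only positive responses.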
Green vegetation extraction works by analyzing color features in an image, typically operating in different color spaces (e.g., RGB, HSV) to identify and separate the plant parts of the image. The RGB color space is among the most commonly used. In this process, based on the frequency of green pixels in the image, the frequency of colors in different areas is computed by histogram statistics, so that green vegetation areas can be extracted effectively.
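A minimal sketch of this color-based extraction follows, assuming a simple green-dominance test in RGB plus a histogram over the green channel; the exact test and the bin count are illustrative assumptions, not the patented algorithm.

```python
import numpy as np

def green_mask(img):
    """Mark pixels whose green channel dominates red and blue (assumed test)."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (g > r) & (g > b)

def green_histogram(img, bins=8):
    """Histogram of green-channel intensity over vegetation pixels only."""
    mask = green_mask(img)
    hist, _ = np.histogram(img[..., 1][mask], bins=bins, range=(0, 256))
    return hist

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2, :, 1] = 200            # top half: strong green (vegetation)
img[2:, :, 0] = 200            # bottom half: red-dominant (soil)
frac = green_mask(img).mean()  # fraction of vegetation pixels
```

On the toy image, exactly the top half is flagged as vegetation, and the histogram counts only those pixels.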
Preferably, the data analysis module includes:
a data analysis unit: the data analysis unit is responsible for analyzing the collected image data and the feature data.
Preferably, the growth condition feedback module includes:
growth condition model training unit: the growth condition model training unit is based on a machine learning algorithm and uses the data with labels after analysis to train a model capable of identifying growth conditions.
Growth condition judging unit: the growth condition judging unit judges the growth condition of the crops by inputting the collected image data and characteristic data of the crops based on the trained model.
Nutrition deficiency component feedback unit: in the nutrition deficiency component feedback unit, a support vector machine model accurately judges which nutrients are missing from the input crop image data and feature data. The input vector of the support vector machine model is the green-vegetation features extracted by the feature extraction unit, obtained with a color feature extraction algorithm. Taking these features as inputs, the support vector machine can accurately diagnose crop nutrient-deficiency symptoms in a high-dimensional feature space. This process can be seen as a nonlinear mapping of the input vector of a least-squares support vector machine into a high-dimensional space, in which a convex optimization problem is solved:
min (1/2)‖ω‖² + (c/2) Σᵢ εᵢ², subject to yᵢ = ω·φ(xᵢ) + b + εᵢ,
where ω is the weight vector, c is a penalty factor, εᵢ is the prediction error, x is the input, y is the output, and b is a constant. Introducing Lagrange multipliers αᵢ and selecting a radial basis kernel K(x, xⱼ) = exp(−‖x − xⱼ‖² / (2σ²)), the decision function of the least-squares support vector machine can be determined as:
f(x) = Σᵢ αᵢ K(x, xᵢ) + b,
where xⱼ is the center of the Gaussian kernel and σ is its width.
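Under the least-squares formulation, training reduces to solving a single linear system rather than a quadratic program. The NumPy sketch below illustrates this on toy data; the kernel width, penalty factor and sample points are assumed values, not those used in the patent.

```python
import numpy as np

def rbf_kernel(x, xj, sigma):
    """Gaussian (radial basis) kernel K(x, x_j) = exp(-||x - x_j||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - xj) ** 2, axis=-1) / (2 * sigma ** 2))

def lssvm_fit(X, y, c=10.0, sigma=1.0):
    """Solve the LS-SVM linear system [[0, 1^T], [1, K + I/c]] [b; alpha] = [0; y]."""
    n = len(y)
    K = np.array([[rbf_kernel(xi, xj, sigma) for xj in X] for xi in X])
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / c]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_train, alpha, b, x, sigma=1.0):
    """Decision function f(x) = sum_i alpha_i K(x, x_i) + b."""
    return sum(a * rbf_kernel(x, xi, sigma) for a, xi in zip(alpha, X_train)) + b

# Toy two-class data standing in for "deficient" (+1) vs "normal" (-1) features.
X = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
alpha, b = lssvm_fit(X, y)
pred = np.sign(lssvm_predict(X, alpha, b, np.array([0.95, 1.0])))
```

The sign of the decision function gives the class; multi-class nutrient diagnosis would combine several such binary classifiers.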
Preferably, the cultivation scheme recommendation module includes:
a culture scheme generation unit: the cultivation scheme generating unit generates a corresponding cultivation scheme by using an agricultural intelligent decision Web service based on a support vector machine according to the growth condition and the nutrition deficiency condition, wherein the cultivation scheme comprises soil trace element adjustment, illumination adjustment, irrigation frequency adjustment and the like.
In the intelligent decision Web service, LIBSVM is used as a general-purpose support vector machine software package, and the support-vector-machine-based intelligent decision Web service is implemented on the J2EE platform. The intelligent decision system is implemented with a support vector machine of the radial basis function (RBF) type and provided to the user in the form of a Web service; with the kernel function K(xᵢ, xⱼ) = exp(−γ‖xᵢ − xⱼ‖²), the resulting support vector machine is a radial basis function classifier. The user obtains personalized decision results simply by sending the current request to the remote service. Providing the system as a Web service also makes it more open and easier to maintain and update.
Preferably, the user feedback and service module includes:
The user uploading photo unit: the user uploads crop image information; the system automatically analyzes the image, gives the user feedback on the crop's growth condition and nutritional composition, and provides a cultivation scheme.
User feedback unit: the user feedback unit is a platform for users to exchange opinions with the developers; it collects post-use feedback, facilitating future improvement and updating of the system.
GPT unit: the GPT unit provides personalized knowledge and services related to crops for users by using natural language processing technology. The system can understand the problems of users, provide detailed information and suggestions for the users in aspects of crop growth, disease control, planting skills and the like, generate personalized crop management suggestions based on user data, and realize data-driven decision support. In addition, the GPT unit also actively collects user feedback to continuously improve services, so that the GPT unit meets user requirements, and provides better agricultural support for farmers and crop growers.
Example 3
Corn is taken as an example crop.
The data collection module comprises an image collection unit and a database unit, wherein the image collection unit is responsible for data collection and preparation, and a camera array is used for collecting a large amount of corn image data; meanwhile, acquiring related characteristic condition data such as the shape, color, surface glossiness and the like of the corn, and recording the characteristic condition data of the corn; the database unit is responsible for storing these collected data in a database.
The image processing and feature extraction module comprises an image processing unit and a feature extraction unit, wherein the image processing unit is responsible for preprocessing the collected corn images according to a unified standard, and comprises the operations of denoising, cutting, resizing and the like on the images so as to improve the consistency of the images.
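As a hedged sketch of what the image processing unit does, the NumPy fragment below denoises with a simple mean filter and resizes by nearest-neighbour sampling. The function names and filter choice are illustrative assumptions (a deployed system would more likely use an OpenCV pipeline), not the patented implementation.

```python
import numpy as np

def denoise_mean(img, k=3):
    """Simple k x k mean filter; edges are handled by replicate padding."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def resize_nearest(img, size):
    """Nearest-neighbour resize to size = (H, W)."""
    h, w = img.shape[:2]
    ys = np.arange(size[0]) * h // size[0]
    xs = np.arange(size[1]) * w // size[1]
    return img[ys][:, xs]

img = np.random.rand(64, 48, 3)                      # stand-in corn photo
out = resize_nearest(denoise_mean(img), (224, 224))  # normalized model input
```

The 224x224 output size matches the ResNet input size used later in this embodiment.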
The feature extraction unit is responsible for extracting features in the image; it adopts a ResNet model, and the green vegetation extraction algorithm used is a multi-threshold image segmentation method.
Further, the structure of the ResNet model used is shown in FIG. 2. The corn images collected by the data collection module, 224x224 in size with 3 channels, are taken as inputs to the ResNet model. The image first passes through a convolution layer using a 7x7 kernel with stride 2 and 3-pixel padding.
Specifically, the convolution operation converts the image into a feature map with more channels; to accelerate training and strengthen the robustness of the model, the feature map is normalized by a batch normalization layer. The ReLU activation function is then applied to the feature map, setting all negative values to zero and increasing the nonlinear expressive capacity of the network; the max pooling layer then halves the spatial size of the feature map and reduces its number of parameters.
Further, the residual block is the basic unit of ResNet. The feature map first passes through a convolution layer with a 3x3 kernel and 1-pixel padding to obtain a new feature map, then through a batch normalization layer and a ReLU activation function; it then passes through another convolution layer, followed by a batch normalization layer and a ReLU activation function; finally, the shortcut connection (i.e., the input feature map) is added to the convolved feature map, and ReLU is applied again.
Further, after the last residual block, the feature map is reduced in dimension by a global average pooling layer; finally, a fully connected layer maps the reduced feature map to the final feature vector.
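The shape arithmetic of the stem and the shortcut addition described above can be checked with a small sketch; `conv_out` and `residual_block` are illustrative helper names, and the toy branch function merely stands in for the two-convolution path.

```python
import numpy as np

def conv_out(size, k, s, p):
    """Spatial size after a convolution/pooling layer: floor((size + 2p - k)/s) + 1."""
    return (size + 2 * p - k) // s + 1

stem = conv_out(224, k=7, s=2, p=3)      # 7x7 stem convolution, stride 2 -> 112
pooled = conv_out(stem, k=3, s=2, p=1)   # 3x3 max pooling, stride 2      -> 56
block = conv_out(pooled, k=3, s=1, p=1)  # 3x3 residual-block conv keeps 56

relu = lambda x: np.maximum(x, 0)

def residual_block(x, f):
    """Shortcut connection: the input is added back to the branch output, then ReLU."""
    return relu(f(x) + x)

y = residual_block(np.array([-2.0, 3.0]), lambda x: x * 0.5)
```

The stride-2 stem and pooling halve the 224-pixel input twice, to 112 and then 56, while the padded 3x3 residual convolutions leave the size unchanged, so the shortcut can be added elementwise.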
Based on the color characteristics of corn at different growth stages, corn from the emergence stage to the early maturity stage is first segmented with the following rule:
G-B > T1 && G-R > T2
where G, B and R are the green, blue and red channels of the image, and T1 and T2 are two segmentation thresholds. Through statistical analysis and experiments on corn images, T1 = T2 = 6 is used here.
In this first segmentation step, missed segmentation occurs, especially in two cases. In intense sunlight, some parts of the corn leaves appear bright due to reflected light, showing as light white in the image; many segmentation algorithms have difficulty segmenting the corn leaf effectively in this case. To address this defect, the segmentation can be refined further with the following rule:
G > T3 && B > T4
through experiments, two segmentation thresholds, namely T3 and T4, were used, set to 250 and 200, respectively. This improved approach works well in solving the problem of the loss of segmentation of the highlight white portions. However, this approach may also introduce noise generated by white items (e.g., white paper). To solve this problem, noise characteristics were analyzed, and the following formula was introduced to filter out noise introduced through the foregoing formula:
G > 210 && G-R < 5 && G-B < 5 && abs(R-B) < 4
In darker conditions, such as overcast and rainy days, the color of corn leaves may appear light green or grayish green, which is another reason many algorithms fail to complete the segmentation. To address this deficiency, the leaf features missed under such conditions were analyzed statistically, and segmentation is performed with the following rule:
G-R > T9 && abs(G-B) < T10
where T9 and T10 are two segmentation thresholds; experimentally, T9 = 15 and T10 = 5.
Compared with the traditional ExG, CIVE and HI methods, the multi-threshold image segmentation method improves the segmentation results for all three types of images. The improvement lies in adjusting the thresholds to the changes in pixel values under different illumination conditions, especially when illumination varies greatly, so as to adapt to illumination-induced changes in the color components of the image. Under obvious illumination changes, the multi-threshold method therefore provides more accurate segmentation than traditional color-feature extraction methods.
The preprocessed corn image is fed into the network; through forward propagation and the learning of multiple convolution layers, the network automatically extracts local features of the corn image such as shape and color. The feature extraction unit then fuses the feature maps of different convolution layers, by concatenation, addition, or attention mechanisms, improving the model's abstraction and expressive capacity on corn images; the fused features are input into a fully connected layer for classification and identification of the corn image.
In the optimization and training stage, network parameters are updated through a back propagation algorithm, so that the model can better extract key features of the corn image and accurately classify and identify the corn image. And finally, evaluating the trained model by using a verification set, and calculating indexes such as accuracy, precision, recall rate and the like to evaluate the performance of the model, so as to ensure that the extracted corn image features have better performance and generalization capability. In the whole process, the image processing unit and the feature extraction unit cooperate with each other to realize the functions of automatically learning and extracting key features of the corn image.
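The validation metrics mentioned above can be computed as below; this is a generic binary-classification sketch with assumed toy labels, not the patent's evaluation code.

```python
import numpy as np

def evaluate(y_true, y_pred, positive=1):
    """Accuracy, precision and recall for a binary label array."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    accuracy = np.mean(y_true == y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Assumed toy validation labels: one missed positive, one false alarm.
acc, prec, rec = evaluate([1, 1, 0, 0], [1, 0, 0, 1])
```

Multi-class growth-condition labels would be evaluated per class in the same way (one-vs-rest).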
The data analysis module comprises a data analysis unit responsible for data analysis and model training: by analyzing the image data and feature data, the characteristics relevant to corn growth condition and nutrient deficiency are identified; after analysis, a model is trained on these data to assess corn growth and nutrient loss. The basic idea of the model is to judge the growth condition of the corn from the shape, color, size and other information in its image, and to judge missing nutritional components from the corn's color and size data. Model training here uses the friet Dataset. The data are divided into a training set, a validation set and a test set for training and optimizing the model, and the model's accuracy and generalization are improved by iterative training and parameter adjustment.
The most important parts of the growth condition feedback module are the growth condition judging unit and the nutrition deficiency component feedback unit; the growth condition judging unit judges the growth condition of the corn, such as normal growth or restricted growth, from the corn images and feature data collected with the camera array.
The nutrition deficiency component feedback unit is a core unit of the growth condition feedback module, and under the natural environment, a total of 40 corn leaf surface images are collected, wherein 20 corn leaf surface images are used as training samples, and the other 20 corn leaf surface images are used as diagnosis samples. These samples covered four different symptoms of corn deficiency, including normal status, nitrogen deficiency, potassium deficiency, and phosphorus deficiency.
In order to establish an accurate diagnosis model for corn nutrient-deficiency symptoms, a binary-coded genetic algorithm is used to determine the optimal least-squares parameter combination. Specifically, a population of 100 individuals is initialized and the genetic algorithm is run for 100 generations, with a crossover probability of 0.8 and a mutation probability of 0.2. The parameters c and σ are limited to the range [2^-5, 2^10] and each is encoded with 10 binary bits. The goal of the genetic algorithm is to find the parameter combination that minimizes the error on the corn nutrient-deficiency symptom images. Six color feature factors are extracted from the 20 actual corn deficiency-symptom images to train a least-squares support vector machine model. Using this model, the other 20 corn images with unknown symptoms are diagnosed; the results show that the error between the calculated output value and the desired output value is very small. This is a fast and effective classification and diagnosis model for corn nutrient symptoms, and it can be used to analyse the remaining corn image data to obtain the corn nutrient-deficiency condition.
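The 10-bit binary encoding of c and σ over [2^-5, 2^10] can be sketched as below. The log-scale decoding is an assumption; the text only states the range, the bit width, and the GA settings (population 100, 100 generations, crossover 0.8, mutation 0.2):

```python
import random

LOW, HIGH = 2.0 ** -5, 2.0 ** 10   # parameter range [2^-5, 2^10] from the text
BITS = 10                          # 10 binary bits per parameter

def decode(bits):
    # Map a 10-bit segment to [2^-5, 2^10] by interpolating the exponent
    # (log-scale mapping is an assumption; the text only states the range).
    value = int("".join(str(b) for b in bits), 2)
    exponent = -5 + (value / (2 ** BITS - 1)) * 15
    return 2.0 ** exponent

def random_individual(rng):
    # One individual encodes both c and sigma: 20 bits in total.
    return [rng.randint(0, 1) for _ in range(2 * BITS)]

rng = random.Random(0)
ind = random_individual(rng)
c, sigma = decode(ind[:BITS]), decode(ind[BITS:])
print(LOW <= c <= HIGH and LOW <= sigma <= HIGH)  # True
```

The GA's selection, crossover and mutation would then operate directly on these bit strings, with fitness given by the least-squares model's error.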
The cultivation scheme recommendation module comprises a cultivation scheme generation unit, which generates an optimal cultivation scheme using an agricultural intelligent decision Web service based on a support vector machine, taking into account the growth condition judgment, the nutrient deficiency detection result and the environmental conditions. This includes soil trace element adjustment advice, fertilization advice, irrigation frequency and intensity, etc. The recommendation strategy can draw on historical corn data and expert knowledge.
LIBSVM is used as the tool for implementing the support vector machine (SVM) algorithm; it is a general-purpose SVM software package that is easy to use, simple to operate, fast and efficient.
Taking as an example the problem of classifying the different decision schemes required for corn in different growth conditions, the support vector machine algorithm uses radial basis functions as kernel functions, with the number and centers of the radial basis functions determined automatically by the algorithm.
The general procedure for constructing an RBF-type support vector machine model based on LIBSVM is as follows:
1) Prepare the data in the format required by the LIBSVM software package; training and test data take the form <label> <index1>:<value1> <index2>:<value2> …;
2) Perform the necessary preprocessing on the data;
3) Select the radial basis function as the kernel function;
4) Select the optimal parameters by cross-validation;
5) Train on the whole training set with the optimal parameters to obtain the support vector machine model.
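Step 1's sparse text format can be sketched with a small serializer; `to_libsvm_line` is an illustrative helper, not part of LIBSVM itself:

```python
def to_libsvm_line(label, features):
    # LIBSVM sparse text format: "<label> <index1>:<value1> <index2>:<value2> ...".
    # Indices are 1-based; zero-valued features may be omitted entirely.
    parts = [str(label)]
    for i, v in enumerate(features, start=1):
        if v != 0:
            parts.append(f"{i}:{v:g}")
    return " ".join(parts)

# Hypothetical growth-condition sample: class 1, six color/shape features.
print(to_libsvm_line(1, [0.52, 0.0, 3.4, 0.11, 0.0, 7.0]))
# 1 1:0.52 3:3.4 4:0.11 6:7
```

One such line per sample, written to a plain text file, is what LIBSVM's training and prediction tools consume.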
After the support vector machine model is obtained, the prediction function in LIBSVM can be used to classify or predict the corn growth condition.
Further, the RBF (radial basis function) type support vector machine model constructed with LIBSVM is packaged as an intelligent decision Web service. In this way, the user can make personalized decisions with this model by accessing the Web service.
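A minimal sketch of such a Web service using only Python's WSGI interface; `classify` is a hypothetical placeholder for the trained RBF model's prediction function, not the patent's actual implementation:

```python
import io
import json

def classify(features):
    # Hypothetical placeholder for the trained SVM's predict call; real code
    # would load the serialized RBF model and invoke its prediction function.
    return "normal" if sum(features) > 0 else "nutrient_deficient"

def decision_app(environ, start_response):
    # Minimal WSGI endpoint: read a JSON feature vector from the request body
    # and return the model's decision as JSON.
    length = int(environ.get("CONTENT_LENGTH") or 0)
    features = json.loads(environ["wsgi.input"].read(length))["features"]
    payload = json.dumps({"decision": classify(features)}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [payload]

# Local smoke test without opening a network socket:
body = json.dumps({"features": [1.0, 2.0, 3.0]}).encode()
environ = {"wsgi.input": io.BytesIO(body), "CONTENT_LENGTH": str(len(body))}
print(decision_app(environ, lambda status, headers: None)[0].decode())
# {"decision": "normal"}
```

A production deployment would run this behind a WSGI server and load the serialized SVM model once at startup rather than per request.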
Preferably, the user feedback and service module comprises a user uploading photo unit, a user feedback unit and a GPT unit. The module provides a user-friendly interface: in the user uploading photo unit, the user is allowed to upload corn images outside the camera array's scheduled shooting times and to obtain real-time feedback and a recommended cultivation scheme. The user interface should be clear and provide guidance and explanation for user operations. The user feedback unit collects user feedback and evaluations, through user surveys and feedback forms, to improve the accuracy and user experience of the system.
Specifically, in the GPT unit, a trained GPT model is integrated into the module. First, large-scale text data related to corn planting and agriculture is collected, including growth conditions, pest control, fertilization techniques and other information; then a GPT model architecture suitable for natural language processing tasks is selected; next, a pre-trained GPT model is loaded, which has undergone general-purpose training on large-scale text data; subsequently, the task and the related label data are defined, with user questions as input and the relevant corn knowledge as output.
Further, a labeled dataset is collected, in which an expert provides the correct output label for each input; the model is then adapted to the agricultural domain by fine-tuning so that it understands the language and knowledge of corn planting, and during this process the parameters and hyper-parameters of the model are adjusted to optimize performance.
Further, the data is divided into small batches, and the model weights are updated using back-propagation and gradient descent to minimize the gap between the model predictions and the labels. After fine-tuning is completed, the performance of the model is evaluated on a separate validation set. Finally, the best-performing model is selected and used in practice to answer user queries about corn; it is continually monitored and refined to ensure that it provides accurate and useful information in a changing agricultural environment. This process requires large amounts of data, computational resources and domain expertise to ensure the validity and reliability of the model.
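The mini-batch weight-update loop described above can be illustrated on a toy model: fitting a single linear weight stands in for fine-tuning, where a real GPT update would apply the same shuffle-batch-descend idea to millions of parameters via backpropagation:

```python
import random

def minibatch_sgd(pairs, epochs=20, batch_size=4, lr=0.1, seed=0):
    # Toy stand-in for the fine-tuning loop: shuffle the labelled data each
    # epoch, cut it into small batches, and nudge the weight down the gradient
    # of the mean squared error so predictions move toward the labels.
    rng = random.Random(seed)
    data = list(pairs)
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for start in range(0, len(data), batch_size):
            batch = data[start:start + batch_size]
            # Gradient of the batch mean squared error with respect to w.
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad
    return w

# Synthetic labelled pairs following y = 3x; the loop should recover w ≈ 3.
pairs = [(x / 10.0, 3 * x / 10.0) for x in range(1, 21)]
print(round(minibatch_sgd(pairs), 3))  # 3.0
```

Holding out a validation set, as the text prescribes, would mean computing this same error on unseen pairs after each epoch and keeping the weights that score best.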
In particular, once the GPT model is integrated into the user feedback module, it can provide basic knowledge about corn and better services. The user is allowed to pose questions or requests about corn on the designed interface, for example "How is corn best cultivated?" or "How do I detect diseases on corn leaves?". Once the user has posed a question, the GPT model processes the input and generates information related to corn. For example, if a user asks how to cultivate corn, the GPT model may generate text explaining the growth cycle of corn, optimal planting conditions (such as soil type, temperature and humidity requirements), fertilization recommendations and pest control methods. Such information may also include how to detect and treat diseases on corn leaves, thereby providing comprehensive guidance. Furthermore, the system may generate personalized corn planting suggestions based on information provided by the user, such as geographic location, soil quality and planting history, to help users optimize crop production; for example, if a user reports a particular problem on corn leaves, the system may provide specific disease identification and control recommendations. To protect user data privacy, data encryption and anonymization measures are taken and the relevant privacy regulations are observed. The system also continuously monitors user feedback and uses it to improve the accuracy and responsiveness of the GPT model; if a user's question is not answered satisfactorily, the query is recorded to refine the model's training data.
In summary, the camera array is used to collect images of the crops; no sensors need to be mounted on the crops, the number of cameras in the array is small, the cameras are far from the crops, and the influence on crop growth is minimal. Through image analysis, the current state of the crops can be understood more comprehensively, which facilitates precise adjustment of their nutrient condition and promotes high-quality growth. As technology advances, camera coverage increases and fewer cameras are required, while image analysis and processing technology matures, so the cost of using the system falls. The system is highly extensible and can be expanded at any time if more image-processing-based functions are required in the future. The GPT unit in the user feedback and service module provides convenient, personalized crop knowledge and services; through natural language understanding it helps farmers solve problems and improve production efficiency, while continuously collecting user feedback for ongoing improvement. It is an important intelligent support tool in the agricultural field and promotes the sustainable development of modern agriculture.

Claims (8)

1. A crop appearance characteristic detecting system, comprising an image processing and feature extraction module, a growth condition feedback module and a cultivation scheme recommendation module, characterized in that:
the image processing and feature extraction module comprises an image processing unit and a feature extraction unit; the image processing unit is used for denoising the obtained crop image and adjusting its size; the feature extraction unit acquires key features from the crop image by training a deep learning model and using a green vegetation extraction algorithm;
the growth condition feedback module comprises a growth condition model training unit, a growth condition judging unit and a nutrient deficiency component feedback unit; the growth condition model training unit is used for training, based on a machine learning algorithm and the analysed labelled data, a model capable of identifying the growth condition; the growth condition judging unit judges the growth condition of the crops by inputting the collected crop image data and feature data into the trained model; the nutrient deficiency component feedback unit accurately judges which nutrient components the crops lack by inputting the crop image data and feature data into the support vector machine model;
the cultivation scheme recommendation module comprises a cultivation scheme generating unit, which generates a corresponding cultivation scheme according to the growth condition and the nutrient deficiency condition using an agricultural intelligent decision Web service based on a support vector machine, wherein the cultivation scheme comprises soil trace element adjustment, illumination adjustment and irrigation frequency adjustment.
2. The crop profile feature detection system of claim 1, wherein: the system also comprises a data collection module, a data analysis module and a user feedback and service module, wherein:
the data collection module comprises an image collection unit and a database unit, wherein the image collection unit is used for collecting image data of plants; the database unit is used for storing the collected image data;
the data analysis module comprises a data analysis unit which analyzes the collected image data and characteristic data;
the user feedback and service module comprises a user uploading photo unit, a user feedback unit and a GPT unit; in the user uploading photo unit, the user uploads crop image information, and the system automatically analyses the image, feeds back the crop's growth condition and nutrient components to the user, and gives a cultivation scheme; the user feedback unit is a platform for users to exchange opinions with the developers, used to collect feedback after use and to improve and update the system; the GPT unit provides users with personalized crop-related knowledge and services using natural language processing technology.
3. The crop profile feature detection system of claim 1, wherein: after preprocessing the image, the features of the plant image are extracted using a deep learning method.
4. The crop profile feature detection system of claim 3, wherein: the deep learning method comprises the convolution layers, pooling layers and activation functions of a convolutional neural network (CNN).
5. The crop profile feature detection system of claim 1, wherein: machine learning algorithms or deep learning models are used to classify and identify the health of plants in the images.
6. The crop profile feature detection system of claim 5, wherein: the deep learning model is one of ResNet, inception, efficientNet.
7. The crop profile feature detection system of claim 1, wherein: the green vegetation extraction algorithm is one of the Excess Green (ExG) index, CIVE, AP-HI and multi-threshold image segmentation methods.
8. The crop profile feature detection system of claim 1, wherein: and collecting the appearance image of the crops in real time through a camera array.
CN202311092860.8A 2023-08-28 2023-08-28 Crop appearance characteristic detecting system Pending CN117152609A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311092860.8A CN117152609A (en) 2023-08-28 2023-08-28 Crop appearance characteristic detecting system


Publications (1)

Publication Number Publication Date
CN117152609A true CN117152609A (en) 2023-12-01

Family

ID=88886080

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311092860.8A Pending CN117152609A (en) 2023-08-28 2023-08-28 Crop appearance characteristic detecting system

Country Status (1)

Country Link
CN (1) CN117152609A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117707091A (en) * 2023-12-25 2024-03-15 盐城中科高通量计算研究院有限公司 Agricultural straw processing quality control system based on image processing

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063686A (en) * 2014-06-17 2014-09-24 中国科学院合肥物质科学研究院 System and method for performing interactive diagnosis on crop leaf segment disease images
CN107222682A (en) * 2017-07-11 2017-09-29 西南大学 Crop growth state testing method and device
CN110347127A (en) * 2019-06-26 2019-10-18 北京农业智能装备技术研究中心 Crop planting mandatory system and method based on cloud service
CN113221723A (en) * 2021-05-08 2021-08-06 余治梅 Traceable self-feedback learning urban plant factory
CN116524279A (en) * 2023-05-19 2023-08-01 广西科技师范学院 Artificial intelligent image recognition crop growth condition analysis method for digital agriculture
CN116645232A (en) * 2023-06-10 2023-08-25 海南玻色科技有限公司 Intelligent management system for agricultural cultivation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Chunjiang: "A Review of Agricultural Knowledge Intelligent Service Technology", Smart Agriculture (《智慧农业》), vol. 50, no. 2, 30 June 2023 (2023-06-30), pages 1-17 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination