CN111611972A - Crop leaf type identification method based on multi-view multi-task ensemble learning - Google Patents

- Publication number: CN111611972A
- Application number: CN202010485899.6A
- Authority: CN (China)
- Legal status: Granted
Classifications

- G06V20/00: Image or video recognition or understanding; Scenes; Scene-specific elements
- G06F18/214: Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/045: Neural networks; Combinations of networks
- G06N3/08: Neural networks; Learning methods
- G06V10/56: Extraction of image or video features relating to colour
Abstract
The invention relates to a crop leaf type identification method based on multi-view, multi-task ensemble learning. Leaf images are selected as the original data set, and feature extraction yields data sets under a plurality of views. CNN models serve as base learners, and independent ensemble learning is performed on each view's data set and on the original data set. The parameters of all base learners are then fixed, the last layer of each base learner's fully connected classifier is removed, the outputs of all models are spliced together, a new classifier is added, and joint feature selection is performed across the views until the verification-set accuracy reaches an expected value, yielding a model over multiple views. Finally, the leaf types are identified using multi-task learning. The method strengthens the accuracy and generalization ability of the model and, as a whole, addresses the weak generalization caused by the insufficient training data and simple depth-stacking of conventional deep learning models.
Description
Technical Field
The invention belongs to the field of artificial intelligence and provides an improved method, built on conventional deep learning models, for identifying crop leaves and their diseases more accurately.
Background
Food security is an increasingly serious problem. Many factors threaten it, and plant diseases constitute a serious threat on a global scale. In the past, crop diseases were mostly identified manually, but manual identification has many shortcomings. With the rise of precision agriculture, information technology now assists agricultural production and offers a new approach to identifying crop diseases. Image processing is one such technology: compared with traditional identification methods it is highly real-time, fast, and has a low misjudgment rate, and it can even provide the means to prevent and control disease propagation in time.
The difficulty of identifying crop diseases from images lies mainly in image segmentation, feature extraction, and classification.
The main methods that address these difficulties include threshold segmentation, edge detection, mathematical morphology, support vector machines, and fuzzy clustering. Although these methods have achieved good classification results, they are traditional machine learning methods that identify diseases by combining low-level visual features with various algorithms, which also gives them inherent limitations.
The threshold segmentation method is simple and efficient to execute, but its results hinge on threshold selection, since the color, texture, and other characteristics of diseased regions often differ greatly from those of unaffected regions. The segmentation quality of edge detection depends on the edge-detection operator and is not robust. The drawback of mathematical morphology is that an object built from unions, intersections, and differences of geometric primitives differs somewhat from the human perception of shape. Fuzzy clustering converges slowly and requires the number of classes and other settings to be fixed in advance, while the performance of support vector machines depends too heavily on the kernel function and on the training speed over the samples. In addition, these methods extract features at a single point in time, usually only once the disease and pest symptoms are already very obvious, which severely harms real-time operation and rules out early identification and control. Their sensitivity to noise and to initialization data makes segmentation accuracy especially problematic for crop disease images taken in complex growing environments, for example when the image background is cluttered or the leaves are covered in powder, making identification difficult. Moreover, most of these methods depend on hand-crafted features and cannot bridge the semantic gap.
In contrast, CNNs, as deep learning models, automatically discover features at progressively higher levels from data and have enjoyed significant success in many different areas. In image recognition in particular, CNNs perform reliably when training data are sufficient. For a general large-scale image classification problem, a convolutional neural network can be used to build a hierarchical classifier, or, in fine-grained recognition, to extract discriminative image features for other classifiers to learn from. For the latter, feature extraction can be performed by manually feeding different parts of an image into the network, or by the network itself through unsupervised learning. Given this success in image recognition, many researchers have in recent years applied CNN methods to diagnosing plant diseases.
At present, although deep-learning-based image recognition beats many traditional algorithms in accuracy, a large number of crop-recognition models still generalize poorly, for two main reasons. First, data set size is limited: manually collecting and labeling crop disease pictures is time-consuming and labor-intensive, so little data is available for model training. The usual remedy, expanding the data set with data augmentation, improves generalization only to a limited degree. Second, many crop-recognition models are simply stacks of convolutional and fully connected layers built under a single-view mindset; yet different views of an object describe different characteristics of it, and single-view models do not generalize strongly.
Disclosure of Invention
The invention aims to solve the above problems in the prior art by providing a crop leaf type identification method based on multi-view, multi-task ensemble learning that offers high model accuracy and strong generalization ability.
To achieve this purpose, the technical scheme provided by the invention is as follows: a crop leaf type identification method based on multi-view and multi-task ensemble learning,
firstly, selecting leaf images as the original data set, and performing feature extraction on the original data set to obtain a plurality of single-view data sets;
then, using CNN models as base learners, performing independent ensemble learning on each single-view data set and on the original data set;
after the independent ensemble learning is finished, fixing the parameters of all base learners, removing the last layer of every fully connected classifier in the base learners, splicing the outputs of all CNN models, adding a new fully connected classifier, and performing joint feature selection across the views until the verification-set accuracy reaches an expected value, thereby obtaining a model over multiple views and completing the multi-view ensemble learning;
and then using multi-task learning to share the feature representations learned by different tasks and identify the leaf types.
The technical scheme is further designed as follows: the base learners are selected by forming a model family from deep learning image recognition models, numbering each model in the family, and randomly selecting a certain number of them as all the base learners for one single-view data set; for the original data set, all models in the family are used as base learners.
The deep learning image recognition models include GoogleNet, VGG, Resnet, and the like.
The loss function in the multi-view ensemble learning combines, for every view and for every base learner selected under that view, a multi-class cross-entropy term with a regularization constraint on each model; it is given in detail in formulas (3) to (7) of the detailed description.
compared with the prior art, the technical scheme of the invention has the following beneficial effects:
the invention adopts CNN ensemble learning, and takes various mature CNN models as a base learner. Based on the method, a model is trained by utilizing multi-view learning according to five views of the edge, the gray level, the texture and the original picture of the crop leaf. And finally, sharing the learned feature representations of different tasks by utilizing multi-task learning. The accuracy and the generalization ability of the model are strengthened, and the problems that the traditional deep learning model is insufficient in training data and the model is weak in generalization ability due to the fact that the model is simply stacked in depth are integrally solved.
The method adopts multi-task learning with hard parameter sharing: the hidden layers are shared among all tasks while each task keeps its own output layer, so that during parallel training the tasks share the feature representations learned by one another, which reduces the risk of overfitting.
Drawings
FIG. 1 is the model design of the present invention;
FIG. 2 is the flow chart of model training of the present invention;
FIG. 3 shows the structure of the VGG model;
FIG. 4 is the ensemble learning model for grayscale-extracted pictures;
FIG. 5 is a picture extracted by texture;
FIG. 6 is a diagram of the multi-view learning architecture;
FIG. 7 is the multi-task learning classifier.
Detailed Description
The invention is described in detail below with reference to the figures and the specific embodiments.
The model design of the crop leaf type identification method based on multi-view and multi-task ensemble learning is shown in FIG. 1. First, leaf images are taken as the original data set, and feature extraction on the original data set yields data sets under a plurality of views; CNN models are used as base learners, and independent ensemble learning is performed on each view's data set and on the original data set.
the original data set picture can obtain pictures under different views, such as gray scale, texture, edge, texture and the like, through a specific convolution kernel (fig. 5 shows a new picture extracted according to the texture). These convolution kernels need to be designed and we can set the parameters of the convolution kernels to (-1, 0, 1; -2, 0, 2; -1, 0, 1) if we want to get the picture texture. The parameters of the convolution kernel can be designed according to the requirements of a specific task. After the original data set is extracted, data sets under a plurality of different views can be obtained.
The pictures of these different views are trained separately, each in its own view, alongside the original pictures. The views are defined as in formula (1):

$$V = \{v_0, v_1, \dots, v_{|view|}\} \quad (1)$$

where the total number of views is $1 + |view|$, $v_0$ denotes the view without feature extraction, i.e. the original pictures, and $v_i$ ($1 \le i \le |view|$) denotes the view extracted by the $i$-th feature extraction method.
The model training process in this embodiment is shown in FIG. 2, and the ensemble learning proceeds as follows.

In step one, we define a model family as in formula (2), where the total number of models is $|model|$:

$$M = \{m_1, m_2, \dots, m_{|model|}\} \quad (2)$$
Here we use mature deep learning image recognition models such as GoogleNet, VGG, Resnet, and so on to form the model family. For example, the VGG deep convolutional network explored the relationship between depth and performance and successfully built networks 16 to 19 layers deep by repeatedly stacking 3 x 3 convolution kernels and 2 x 2 max-pooling layers; VGGNet is still used to extract image features today. Its network structure is shown in FIG. 3. VGGNet comes in several configurations ranging in depth from 11 to 19 layers, of which VGGNet-16 and VGGNet-19 are the most common. VGGNet divides the network into 5 stages, each connecting multiple 3 x 3 convolutional layers in series and followed by a max-pooling layer, with 3 fully connected layers and a softmax layer at the end.
Each model in the family is then numbered, and a certain number of them are randomly selected as all the base learners under one view. For example, suppose five models form the family, numbered 1 to 5. Under the grayscale view we might select three of them; if the random selection yields 1, 2, and 5, then models 1, 2, and 5 are all the models used on grayscale images. Under the original view we use all models, with no random selection.
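A minimal sketch of this numbering-and-sampling step follows; the family membership beyond the models named above is an assumption:

```python
import random

# Hypothetical model family numbered 1..5, as in the worked example above.
model_family = {1: "GoogleNet", 2: "VGG16", 3: "ResNet50", 4: "VGG19", 5: "InceptionV3"}

selected_ids = sorted(random.sample(list(model_family), k=3))  # e.g. [1, 2, 5]
grayscale_learners = [model_family[i] for i in selected_ids]   # base learners for one view
original_view_learners = list(model_family.values())           # original view keeps all models
```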
For a single model in the family, a new fully connected classifier must be designed according to the classes and scale of our own data set. Under the keras framework, for example, several fully connected layers can be stacked to suit the data set: the activation function of the first few layers can be set to "relu" (these layers learn features), and the last layer is set to "softmax" (it computes the probability that a picture belongs to each category).

We then need to freeze part of the single model's convolutional base. In deep learning, a few convolutional layers and one pooling layer typically form a group, and several groups make up the convolutional base. The freezing operation here typically freezes all groups except the last group of convolutional and pooling layers; under keras, a convolutional (or pooling) layer is frozen by setting its trainable attribute to False. Next, the model is trained on the specific data set (the pictures of the corresponding view) until the validation-set accuracy reaches the expected effect. The last few groups of the convolutional base are then thawed, generally the second- and third-to-last groups, by setting their trainable attribute back to True, and the model is trained jointly until the validation-set accuracy again reaches the expected effect. After training, the entire base learner (the single model) is frozen.
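A hedged sketch of this recipe under keras follows (not the patent's own code); the VGG16 backbone, input size, layer widths, and class count are assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

conv_base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                     input_shape=(224, 224, 3))

# Step 1: freeze every convolutional group except the last one (block5 in VGG16).
for layer in conv_base.layers:
    layer.trainable = layer.name.startswith("block5")

num_classes = 10  # assumed number of leaf categories
model = keras.Sequential([
    conv_base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),             # feature-learning layer ("relu")
    layers.Dense(num_classes, activation="softmax"),  # per-class probabilities ("softmax")
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# ... fit on the view-specific pictures until validation accuracy is acceptable ...

# Step 2: thaw the second- and third-to-last groups and train the whole model jointly.
for layer in conv_base.layers:
    layer.trainable = layer.name.startswith(("block3", "block4", "block5"))
model.compile(optimizer=keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# ... joint training; afterwards freeze the entire base learner before ensembling ...
```

Note that keras only picks up changes to the trainable flags when the model is recompiled, which is why compile is called again after thawing.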
Under a single view, the last layer of every base learner's fully connected classifier is removed, the outputs of all models are spliced together (with the last layer removed, each output is a feature vector rather than a class score), a new classifier is added, and the several base learners then undergo ensemble learning until the verification-set accuracy reaches the expected effect. This yields the ensemble learning model under that single view, as shown in FIG. 4.
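A sketch of this splicing step with the keras functional API; the helper names are hypothetical, and base_learners stands for the trained, frozen models of one view:

```python
from tensorflow import keras
from tensorflow.keras import layers

def strip_last_layer(model):
    # Re-wire a trained model so it outputs its penultimate (feature) layer
    # instead of class scores.
    return keras.Model(model.input, model.layers[-2].output)

def build_view_ensemble(base_learners, num_classes, input_shape=(224, 224, 3)):
    inp = keras.Input(shape=input_shape)
    extractors = [strip_last_layer(m) for m in base_learners]
    features = [f(inp, training=False) for f in extractors]       # inference-mode passes
    merged = layers.Concatenate()(features)                        # splice all outputs
    out = layers.Dense(num_classes, activation="softmax")(merged)  # new classifier
    return keras.Model(inp, out)
```

Step two below repeats the same splice-and-reclassify pattern one level up, with the per-view ensemble models taking the place of the base learners.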
In step two, once the ensemble learning under each single view reaches the expected effect, the parameters of all base learners under that view are fixed. The last layer of the fully connected classifier in every single view is then removed, the outputs of all the view models are spliced together, a new classifier is added, and joint feature selection is performed across the multiple views until the verification-set accuracy reaches the expected effect. This yields the model over multiple views and completes the multi-view ensemble learning. Part of the structure is shown in FIG. 6.
The loss function involved in multi-view ensemble learning is as follows:
we define the dataset under the ith (0. ltoreq. i. ltoreq. view) view asWherein Since the dimension of the picture is uncertain, the jth sample xijIs not a deterministic number (it is then preprocessed so that all input sample dimensions of the basis learner are the same, but different basis learning is allowedThe input samples of the learner are different. ) And the dimensions of the labels are the same and are the total number K of the categories.
Suppose we are in the ith view viNext, the t-th basis learner m is usedtThe softmax is used for multi-classification, the loss function uses multi-classification cross entropy, and the view v can be obtainediSample j below in the base learner mtThe probability of the following being of the kth class is formula (3)
The loss function of the jth sample is thus
Then we can get the view v at the ithiAt the t-th base learner mtThe loss function of
Since we are in view v0All base learners are used, while the model for integrated learning under the ith (1 ≦ i ≦ view |) view is randomly selected, with a total number of p (p) for random selection<Model |). We can get the view v0The model selected below is mi,m2,…,m|modej|The model selected under the ith (1 ≦ i ≦ view |) view isWe use regularization constraints under each model of each viewWe can get the loss function under multiple views as
The overall loss function is:
and step three, sharing learned feature representations of different tasks by utilizing multi-task learning, and identifying the blade types.
The core idea of multi-task learning is that several tasks are trained in parallel and share the feature representations each learns, so that the features the model learns are fully used and resources are saved. Here three tasks are chosen: the crop species, the disease type, and the disease severity. In our network, the last layer of the original fully connected classifier is removed and the classifier shown in FIG. 7 is attached after the penultimate layer. The first task classifies the crop species, so in keras the activation function of its last layer can be set to "softmax"; the second task classifies the disease, again with a "softmax" final activation; the third task predicts disease severity and is trained with the "mse" loss in keras (mean squared error is a loss rather than an activation, so its output layer is linear). The model parameters from multi-view learning are then fixed and the new heads are trained until the verification-set accuracy reaches the expected effect, realizing the multi-task learning. Multi-task learning is an inductive transfer method that exploits the domain-specific information implicit in the training signals of several related tasks. During backpropagation, it allows features in the shared hidden layer that are dedicated to one task to be used by the other tasks, and it allows the learning of features that apply to several different tasks, features that a single-task network often cannot easily learn.
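A sketch of the three task heads in keras; the shared feature width and the class counts are assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(2048,), name="shared_features")  # spliced multi-view feature vector
hidden = layers.Dense(512, activation="relu")(inp)        # shared hidden layer (hard sharing)

species = layers.Dense(8, activation="softmax", name="species")(hidden)   # crop species
disease = layers.Dense(12, activation="softmax", name="disease")(hidden)  # disease type
severity = layers.Dense(1, name="severity")(hidden)       # linear output for severity regression

model = keras.Model(inp, [species, disease, severity])
model.compile(optimizer="adam",
              loss={"species": "categorical_crossentropy",
                    "disease": "categorical_crossentropy",
                    "severity": "mse"})                    # "mse" is the loss for the third head
```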
The technical solutions of the present invention are not limited to the above embodiments, and all technical solutions obtained by equivalent substitution fall within the scope of the present invention.
Claims (4)
1. A crop leaf type identification method based on multi-view and multi-task ensemble learning is characterized in that:
selecting leaf images as the original data set, and performing feature extraction on the original data set to obtain a plurality of single-view data sets;
using CNN models as base learners, performing independent ensemble learning on each single-view data set and on the original data set;
after the independent ensemble learning is finished, fixing the parameters of all base learners, removing the last layer of every fully connected classifier in the base learners, splicing the outputs of all CNN models, adding a new fully connected classifier, and performing joint feature selection across the views until the verification-set accuracy reaches an expected value, thereby obtaining a model over multiple views and completing the multi-view ensemble learning;
and using multi-task learning to share the feature representations learned by different tasks and identify the leaf types.
2. The crop leaf type identification method based on multi-view and multi-task ensemble learning as claimed in claim 1, wherein the base learners are selected as follows: deep learning image recognition models are chosen to form a model family, each model in the family is numbered, and a certain number of them are randomly selected as all the base learners for one single-view data set; for the original data set, all models in the family are used as base learners.
3. The crop leaf type identification method based on multi-view and multi-task ensemble learning as claimed in claim 2, wherein the deep learning image recognition models include GoogleNet, VGG, Resnet, and the like.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010485899.6A | 2020-06-01 | 2020-06-01 | CN111611972B (en) Crop leaf type identification method based on multi-view multi-task integrated learning |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111611972A (en) | 2020-09-01 |
| CN111611972B (en) | 2024-01-05 |
Family

- ID: 72201702

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010485899.6A (Active) | Crop leaf type identification method based on multi-view multi-task integrated learning | 2020-06-01 | 2020-06-01 |

Country (1)

| Country | Link |
|---|---|
| CN | CN111611972B (en) |
Patent Citations (2)

| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| CN106845549A * | 2017-01-22 | 2017-06-13 | Method and device for scene and target recognition based on multi-task learning |
| CN109508650A * | 2018-10-23 | 2019-03-22 | Wood recognition method based on transfer learning |

Non-Patent Citations (3)

| Title |
|---|
| HOUJEUNG HAN ET AL.: "Multi-view visual speech recognition based on multi-task learning", IEEE |
| HE Xuemei: "A survey of multi-view clustering algorithms", Software Guide (软件导刊) |
| XU Jinghui et al.: "Image recognition of maize diseases based on transfer learning convolutional neural network", Transactions of the Chinese Society for Agricultural Machinery (农业机械学报) |
Cited By (6)

| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| CN112418219A * | 2020-11-24 | 2021-02-26 | Method and related device for identifying the color and shape of garment fabric cut pieces |
| CN112712106A * | 2020-12-07 | 2021-04-27 | Mechanical equipment health state identification method based on multi-view adversarial autoencoders |
| CN112712106B * | 2020-12-07 | 2022-12-09 | Mechanical equipment health state identification method based on multi-view adversarial autoencoders |
| CN113191391A * | 2021-04-07 | 2021-07-30 | Road disease classification method for three-dimensional ground-penetrating radar maps |
| WO2023109319A1 * | 2021-12-14 | 2023-06-22 | Systems and methods for crop disease diagnosis |
| CN116523136A * | 2023-05-05 | 2023-08-01 | Mineral resource space intelligent prediction method and device based on multi-model ensemble learning |
Also Published As

| Publication number | Publication date |
|---|---|
| CN111611972B (en) | 2024-01-05 |
Similar Documents

| Publication | Title |
|---|---|
| CN111611972A | Crop leaf type identification method based on multi-view multi-task ensemble learning |
| Kuo et al. | Green learning: Introduction, examples and outlook |
| CN112347970B | Remote sensing image ground object identification method based on graph convolutional neural network |
| CN112883839B | Remote sensing image interpretation method based on adaptive sample set construction and deep learning |
| CN108052966A | Automatic extraction and classification of remote sensing image scenes based on convolutional neural networks |
| CN113705371B | Water visual scene segmentation method and device |
| CN106874862B | Crowd counting method based on sub-model technology and semi-supervised learning |
| Kolli et al. | Plant disease detection using convolutional neural network |
| CN111524140A | Medical image semantic segmentation method based on CNN and random forest |
| Kundur et al. | Deep convolutional neural network architecture for plant seedling classification |
| Chen-McCaig et al. | Convolutional neural networks for texture recognition using transfer learning |
| CN114612450A | Image detection and segmentation method and system based on data-augmented machine vision, and electronic equipment |
| Hu et al. | Learning salient features for flower classification using convolutional neural network |
| Shkanaev et al. | Unsupervised domain adaptation for DNN-based automated harvesting |
| CN110853052A | Tujia brocade pattern primitive segmentation method based on deep learning |
| CN112907503A | Penaeus vannamei Boone quality detection method based on adaptive convolutional neural network |
| CN115100509B | Image identification method and system based on multi-branch block-level attention enhancement network |
| CN111310838A | Drug effect image classification and identification method based on deep Gabor network |
| CN114581470B | Image edge detection method based on plant community behaviors |
| CN113723456B | Automatic astronomical image classification method and system based on unsupervised machine learning |
| CN115511838A | Plant disease high-precision identification method based on swarm intelligence optimization |
| CN113095235B | Image target detection method, system and device based on weak supervision and a discrimination mechanism |
| CN114624715A | Radar echo extrapolation method based on a self-attention spatio-temporal neural network model |
| Fan et al. | Corn diseases recognition method based on multi-feature fusion and improved deep belief network |
| Zhang et al. | An automatic detection model of pulmonary nodules based on deep belief network |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |