CN111488917A - Garbage image fine-grained classification method based on incremental learning - Google Patents
- Publication number: CN111488917A
- Application number: CN202010198397.5A
- Authority: CN (China)
- Prior art keywords: class, incremental, garbage, data set, old
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G06F18/2411—Pattern recognition; classification techniques relating to the classification model, based on the proximity to a decision surface, e.g. support vector machines
- G06F18/214—Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045—Neural networks; combinations of networks
- G06N3/08—Neural networks; learning methods
Abstract
The invention relates to a fine-grained garbage image classification method based on incremental learning, which comprises the following steps. First, a garbage image database covering old and new categories is constructed. Second, a deep convolutional feature extraction network and an incremental classifier are trained separately: a ResNet18-based deep convolutional neural network (the ResNet18 network) is first trained with the selected old-category garbage image data set; the fully connected layer is then removed from the trained ResNet18 network, and the remainder serves as the deep convolutional feature extraction network for incremental learning; finally, this network extracts the deep convolutional features of the old-category garbage images as the negative-class sample data set of the incremental SVM classifier and the deep convolutional features of the newly added category garbage images as the positive-class sample data set, and the incremental SVM classifier is trained on them. Third, the classification incremental learning model is established.
Description
Technical Field
The invention belongs to the field of image classification and relates to a method that uses a deep convolutional neural network together with an incremental learning strategy to achieve fine-grained classification of garbage images whose categories keep growing rapidly.
Background
Image classification is one of the most fundamental tasks in computer vision, and the related technology is widely used in industries such as smart cities, medical diagnosis, meteorological analysis and financial services. With the rapid development of deep learning in recent years, deep-learning-based image classification keeps achieving new breakthroughs: VGGNet in 2014 stacked small convolution kernels [1], GoogLeNet in 2015 proposed the Inception module to widen the network [2], ResNet introduced skip connections [3], and in 2017 SENet won the last ImageNet classification challenge [4]. The TOP-5 classification error rate on large-scale image data sets has thereby been reduced to 2.251%, surpassing the average human error rate of 5.1%.
Although existing deep-learning image classification algorithms achieve high accuracy on large-scale image data sets, they are relatively inflexible: they can only recognize the image classes predefined in the data set and cannot give a result for a new class that was never defined. In practice, many recognition tasks face a rapidly growing number of new classes. In garbage image recognition, for example, a large number of pictures of different garbage types can be collected before the classification model is built, but the variety of household goods is enormous, and collecting pictures of every garbage type is time-consuming, labor-intensive and expensive. Moreover, new products are constantly released, so the number of garbage categories keeps increasing. How to handle the recognition of new categories is therefore an urgent problem. Retraining a new deep recognition network on the previous data set plus a small number of newly added classes consumes a large amount of computing resources and may reduce the recognition accuracy on the previous classes. The invention therefore adopts incremental learning to handle a small number of newly added categories. The basic idea of incremental learning is to keep the original classification network as unchanged as possible and to add auxiliary components that learn the new classes.
At present garbage is usually sorted into four coarse categories (kitchen waste, recyclable waste, hazardous waste and other waste), although recyclable waste can be further divided into waste paper, metal, plastic, glass and so on. If classification follows only the four coarse categories, no new category is ever added, incremental learning cannot be exercised, and the whole network model would have to be retrained. The invention therefore performs fine-grained classification of garbage images, refining the four coarse categories into many fine categories. On the one hand this suits incremental learning training, since each newly added fine category still belongs to one of the four coarse categories; on the other hand the fine-grained result can be used directly in fine-grained garbage sorting to support the recycling of garbage resources.
In the field of incremental learning, several relatively effective methods have been proposed. In 2016 Li and Hoiem proposed the LwF (Learning without Forgetting) method [5], whose idea is to introduce a knowledge distillation loss so as to preserve the parameters related to the old categories as much as possible. In 2016 Rusu proposed progressive neural networks [6], which retain a large number of pre-trained models. In 2017 Shin, working in a generative adversarial network framework, retained a set of generators for previous tasks and then learned parameters that accommodate a mixture of real data from the new task and replayed data from the previous tasks [7]. In 2017 Rebuffi proposed iCaRL, an exemplar-set-based method that creates exemplar sets for new categories and prunes the existing exemplar sets [8].
Although these approaches realize incremental learning to some extent, the methods based on retaining model parameters classify the new categories poorly, and the exemplar-set methods, while accurate, require a large amount of storage space to be reserved in advance. A method is therefore urgently needed that preserves the classification accuracy on the old categories while quickly recognizing a small number of newly added categories, so as to solve the fine-grained classification of garbage images.
[1] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
[2] Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 1-9.
[3] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.
[4] Hu J, Shen L, Sun G. Squeeze-and-excitation networks[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 7132-7141.
[5] Li Z, Hoiem D. Learning without forgetting[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(12): 2935-2947.
[6] Rusu A A, Rabinowitz N C, Desjardins G, et al. Progressive neural networks[J]. arXiv preprint arXiv:1606.04671, 2016.
[7] Shin H, Lee J K, Kim J, et al. Continual learning with deep generative replay[C]. Advances in Neural Information Processing Systems. 2017: 2990-2999.
[8] Rebuffi S A, Kolesnikov A, Sperl G, et al. iCaRL: Incremental classifier and representation learning[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 2001-2010.
Disclosure of Invention
Aiming at the difficulty of classifying and recognizing continuously added garbage types, the invention provides a fine-grained garbage image classification method that combines a deep convolutional neural network with an incremental learning strategy. The method quickly achieves accurate classification of a small number of newly added garbage categories while maintaining high accuracy on the large number of old garbage categories. The technical scheme is as follows:
A fine-grained garbage image classification method based on incremental learning comprises the following steps:
First, a garbage image database of old and new categories is constructed: most of the garbage classes are selected as the old-category data set and the remaining few classes form the new-category data set, where the old-category data set is used to train the feature extraction network and the new-category data set is used for the incremental learning test.
Second, the deep convolutional feature extraction network and the incremental classifier are trained separately: first, a ResNet18-based deep convolutional neural network (the ResNet18 network) is trained with the old-category garbage image data set selected in the first step; then the fully connected layer of the trained ResNet18 network is removed, and the remainder serves as the deep convolutional feature extraction network for incremental learning; finally, this network extracts the deep convolutional features of the old-category garbage images as the negative-class sample data set of the incremental SVM classifier and the deep convolutional features of the newly added category garbage images as the positive-class sample data set, and the incremental SVM classifier is trained;
Third, the classification incremental learning model is established: the garbage image to be classified is first passed through the deep convolutional feature extraction network; the resulting convolutional features are then fed both into the incremental SVM classifier and into the fully connected layer that judges the old categories, yielding prediction probabilities for the new and old categories respectively; finally, the garbage category of the image is identified from the SVM's new-category probability prediction and the convolutional network's old-category probability prediction.
Preferably, the method of the second step is as follows:
(1) Training the deep convolutional feature extraction network: a ResNet18-based deep convolutional neural network is trained with the old-category garbage image data set selected in the first step; the last fully connected layer is removed, and the first 17 convolutional layers of the ResNet18 network are kept as the deep convolutional feature extraction network of the incremental learning network;
the method comprises the steps of firstly, extracting the deep convolution characteristics of old class garbage images by using a trained deep convolution characteristic extraction network to serve as a negative class sample data set of an incremental SVM classifier, extracting the deep convolution characteristics of the old class garbage images by using the deep convolution characteristics extraction network to serve as a positive class sample data set of the incremental SVM classifier, storing the deep convolution characteristics corresponding to each class by using a dit structure, obtaining a positive and negative class sample data set of the incremental SVM classifier after L2 normalization, training one-to-many newly added class SVM classifiers according to the positive and negative class sample data sets, and finally, optimizing SVM parameters by using a grid search method to obtain a plurality of trained new class SVM classifiers so as to obtain the incremental SVM classifier.
The fine-grained incremental garbage classification algorithm that combines a deep convolutional neural network with support vector incremental learning recognizes the large number of old garbage categories more precisely than classical incremental learning algorithms based on fine-tuning model parameters or on an old-category image generator. In addition, the algorithm stores only the low-dimensional deep convolutional features of the sample images for classification, so its storage requirement is far smaller than that of traditional exemplar-set-based incremental learning, and fast training is also ensured.
Drawings
FIG. 1 is a diagram of the deep convolutional feature extraction network.
FIG. 2 is a flowchart of the incremental learning model.
Detailed Description
To make the technical scheme of the invention clearer, the invention is further explained below with reference to the attached drawings. The invention is realized through the following steps:
first, a data set is prepared.
(1) Picture data and tag data are prepared.
Construct a garbage image database of old and new categories: the Huawei Cloud garbage classification data set is used, which contains 19459 pictures in 43 classes. The 43 classes are divided into old and new groups: 30 classes are randomly selected as the old-category data set to train the feature extraction network, and the remaining 13 classes serve as the new-category data set for testing the incremental learning effect.
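As a concrete illustration of this split, the short Python sketch below randomly draws 30 of the 43 class names as old categories and keeps the remaining 13 as new categories. The folder layout (one sub-directory per class under `data/`) and the fixed random seed are assumptions made for the example, not details taken from the patent.

```python
# Hypothetical sketch of the 30/13 old-vs-new class split.
# The directory layout ("data/<class_name>/...") is an assumption, not from the patent.
import os
import random

data_root = "data"                               # one sub-folder per garbage class
all_classes = sorted(os.listdir(data_root))
assert len(all_classes) == 43                    # Huawei Cloud garbage data set: 43 classes

random.seed(0)                                   # fixed seed so the split is reproducible
old_classes = random.sample(all_classes, 30)     # used to train the feature extractor
new_classes = [c for c in all_classes if c not in old_classes]   # 13 incremental classes

print(len(old_classes), "old classes,", len(new_classes), "new classes")
```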
(2) Preprocess the images.
The image data set is preprocessed, and the data set mean and standard deviation are computed for normalization. The 30 old classes are divided into a training set and a validation set at a ratio of 9:1. Training images are randomly cropped and scaled to 224 × 224 and randomly flipped horizontally with probability 0.5 to augment the data; validation images are scaled to 256 × 256 and then center-cropped to 224 × 224.
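A minimal torchvision sketch of this preprocessing is given below; the mean and standard deviation values are placeholders, since the patent computes them from the data set itself.

```python
# Sketch of the training / validation preprocessing described above.
# `mean` and `std` are placeholders; the patent computes them from the data set.
from torchvision import transforms

mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]     # placeholder statistics

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),           # random crop, rescaled to 224 x 224
    transforms.RandomHorizontalFlip(p=0.5),      # horizontal flip with probability 0.5
    transforms.ToTensor(),
    transforms.Normalize(mean, std),
])

val_transform = transforms.Compose([
    transforms.Resize(256),                      # scale to 256 on the shorter side
    transforms.CenterCrop(224),                  # then center-crop to 224 x 224
    transforms.ToTensor(),
    transforms.Normalize(mean, std),
])
```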
Second, the deep convolutional feature extraction network and the incremental SVM classifier are trained in sequence. The ResNet18 network, which has 17 convolutional layers and one fully connected layer and is widely used in image classification, is first trained on the old-category image data set selected in step (1) of the first stage. The fully connected layer is then removed from the trained ResNet18 network, and the remainder serves as the deep convolutional feature extraction network of the incremental learning network. Finally, the feature extraction network extracts the deep convolutional features of all old-category garbage images as the negative-class sample set of the incremental SVM classifier and the deep convolutional features of the newly added category garbage images as the positive-class sample set, and the incremental SVM classifier is trained. The specific method is as follows:
(1) Training the deep convolutional feature extraction network: first, a ResNet18-based deep convolutional neural network is trained on the 30 old garbage categories selected in step (1) of the first stage, using the cross-entropy loss, the SGD optimizer, a learning rate of 0.1, a momentum coefficient of 0.9 and a weight decay of 0.0001. The learning rate is adjusted dynamically according to the monitored metric: when the metric has not improved for 10 epochs (a patience of 10), the learning rate is reduced by a factor of 0.1, which improves network performance. The batch size is set to 128, and the trained network parameters are saved after 100 epochs. Top-1 and top-5 accuracy are used as evaluation metrics, where top-1 is the accuracy of the label with the highest predicted probability and top-5 counts a prediction as correct if the true label is among the five highest-probability predictions. The ResNet18 network is then modified: the first 17 layers are kept as the deep convolutional feature extraction network and the last fully connected layer is removed, as shown in Fig. 1.
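The following PyTorch sketch shows one way to set up this training configuration and to truncate the trained network into the feature extractor; the data loading and training loop are omitted, and using validation top-1 accuracy as the monitored metric for the learning-rate schedule is an assumption.

```python
# Sketch of the ResNet18 training setup: cross-entropy loss, SGD (lr 0.1,
# momentum 0.9, weight decay 1e-4), lr reduced 10x after 10 epochs without improvement.
import torch
import torch.nn as nn
from torchvision import models

num_old_classes = 30
model = models.resnet18(num_classes=num_old_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.1, patience=10)   # monitor e.g. validation top-1

# ... train for 100 epochs with batch_size=128, calling scheduler.step(val_top1)
# after each epoch, then save the weights:
# torch.save(model.state_dict(), "resnet18_old_classes.pth")

# Feature extraction network: everything except the last fully connected layer.
# Its output (after global average pooling) is a 512-dimensional deep feature.
feature_extractor = nn.Sequential(*list(model.children())[:-1])
feature_extractor.eval()
```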
(2) Training the incremental SVM classifier. First, the trained deep convolutional feature extraction network extracts the deep convolutional features of the 30 old garbage categories, and the deep convolutional features and probability prediction information of each category are stored in a dict structure. Compared with exemplar-set-based incremental learning methods, the convolutional features extracted by the neural network require far less storage space, which addresses the problem that in incremental learning the number of categories grows much faster than the storage that can be reserved. After L2 normalization, these features form the negative-class sample data set of the incremental SVM classifier. The trained feature extraction network likewise extracts and stores the deep convolutional features of the 13 newly added category images, which after L2 normalization form the positive-class sample data set. One-vs-rest SVM classifiers for the newly added categories are then trained on the positive- and negative-class sample data sets; the support vector machine seeks the decision boundary that maximizes the geometric margin between the positive and negative classes in feature space, which makes it well suited to learning from the small amount of data available for the newly added classes. Finally, a grid search is used to tune the SVC parameters, and the trained new-category SVM classifiers together constitute the incremental SVM classifier.
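A sketch of step (2) with scikit-learn, continuing from the `feature_extractor` of the previous sketch: deep features are extracted, L2-normalized and stored per class in a dict, and a one-vs-rest SVC is fitted for each newly added class with its parameters tuned by grid search. The data loaders (`old_class_loader`, `new_class_loader`) and the parameter grid are illustrative assumptions.

```python
# Sketch of incremental SVM training on L2-normalized deep convolutional features.
# `feature_extractor` comes from the previous sketch; the loaders and the
# parameter grid are illustrative assumptions.
import numpy as np
import torch
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import normalize
from sklearn.svm import SVC

@torch.no_grad()
def extract_features(loader):
    feats, labels = [], []
    for images, targets in loader:
        f = feature_extractor(images).flatten(1)      # (N, 512) deep features
        feats.append(f.cpu().numpy())
        labels.append(targets.numpy())
    return np.concatenate(feats), np.concatenate(labels)

old_feats, _ = extract_features(old_class_loader)            # negative class (30 old categories)
new_feats, new_labels = extract_features(new_class_loader)   # positive classes (13 new categories)

old_feats = normalize(old_feats)                      # L2 normalization
new_feats = normalize(new_feats)

# dict structure: deep features stored per newly added class, as in the patent
features_by_class = {c: new_feats[new_labels == c] for c in np.unique(new_labels)}

# One-vs-rest: one SVC per newly added class, hyper-parameters tuned by grid search.
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.001]}
incremental_svms = {}
for c, pos in features_by_class.items():
    X = np.vstack([pos, old_feats])
    y = np.hstack([np.ones(len(pos)), np.zeros(len(old_feats))])
    search = GridSearchCV(SVC(probability=True), param_grid, cv=3)
    search.fit(X, y)
    incremental_svms[c] = search.best_estimator_
```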
Third, the garbage image classification model based on incremental learning is established. The garbage category of the image to be classified is identified by combining the convolutional network's old-category probability prediction with the SVM's new-category probability prediction stored in step (2) of the second stage. The specific process is as follows: the garbage image is first passed through the deep convolutional feature extraction network; the resulting convolutional features are then fed into the incremental SVM classifier and into the fully connected layer for the old categories, yielding prediction probabilities for the new and old categories respectively; finally, a softmax layer processes these outputs and the category of the garbage image is decided. The flowchart of the incremental learning model is shown in Fig. 2.
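Finally, a minimal sketch of the combined decision, reusing `model`, `feature_extractor` and `incremental_svms` from the sketches above. Concatenating the old-category softmax probabilities with the new-category SVM probabilities and taking the arg-max is one plausible reading of the flow in Fig. 2, not necessarily the patent's exact rule.

```python
# Sketch of inference: old-category probabilities from the CNN's fully connected
# layer, new-category probabilities from the incremental SVMs, combined into one decision.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.preprocessing import normalize

@torch.no_grad()
def classify(image_tensor):
    """image_tensor: a single preprocessed image of shape (3, 224, 224)."""
    feat = feature_extractor(image_tensor.unsqueeze(0)).flatten(1)      # (1, 512)

    # Old categories: trained fully connected layer followed by softmax (30 values).
    old_probs = F.softmax(model.fc(feat), dim=1).squeeze(0).cpu().numpy()

    # New categories: positive-class probability of each one-vs-rest SVM
    # on the L2-normalized feature (13 values).
    feat_np = normalize(feat.cpu().numpy())
    new_probs = np.array([svm.predict_proba(feat_np)[0, 1]
                          for _, svm in sorted(incremental_svms.items())])

    scores = np.concatenate([old_probs, new_probs])    # 30 old + 13 new scores
    return int(np.argmax(scores))                      # index of the predicted class
```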
Claims (2)
1. A fine-grained garbage image classification method based on incremental learning, comprising the following steps:
First, constructing a garbage image database of old and new categories: most of the garbage classes are selected as the old-category data set and the remaining few classes form the new-category data set, where the old-category data set is used to train the feature extraction network and the new-category data set is used for the incremental learning test;
Second, training the deep convolutional feature extraction network and the incremental classifier separately: first, a ResNet18-based deep convolutional neural network (the ResNet18 network) is trained with the old-category garbage image data set selected in the first step; then the fully connected layer of the trained ResNet18 network is removed, and the remainder serves as the deep convolutional feature extraction network for incremental learning; finally, this network extracts the deep convolutional features of the old-category garbage images as the negative-class sample data set of the incremental SVM classifier and the deep convolutional features of the newly added category garbage images as the positive-class sample data set, and the incremental SVM classifier is trained;
Third, establishing the classification incremental learning model: the garbage image to be classified is first passed through the deep convolutional feature extraction network; the resulting convolutional features are then fed both into the incremental SVM classifier and into the fully connected layer that judges the old categories, yielding prediction probabilities for the new and old categories respectively; finally, the garbage category of the image is identified from the SVM's new-category probability prediction and the convolutional network's old-category probability prediction.
2. The method of claim 1, wherein the second step is performed by:
(1) Training the deep convolutional feature extraction network: a ResNet18-based deep convolutional neural network is trained with the old-category garbage image data set selected in the first step; the last fully connected layer is removed, and the first 17 convolutional layers of the ResNet18 network are kept as the deep convolutional feature extraction network of the incremental learning network;
(2) Training the incremental SVM classifier: first, the trained deep convolutional feature extraction network extracts the deep convolutional features of the old-category garbage images as the negative-class sample data set of the incremental SVM classifier and the deep convolutional features of the newly added category garbage images as the positive-class sample data set; the deep convolutional features of each category are stored in a dict structure and L2-normalized to obtain the positive- and negative-class sample data sets of the incremental SVM classifier; one-vs-rest SVM classifiers for the newly added categories are then trained on these sample data sets; finally, the SVM parameters are tuned with a grid search, and the trained new-category SVM classifiers together constitute the incremental SVM classifier.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010198397.5A CN111488917A (en) | 2020-03-19 | 2020-03-19 | Garbage image fine-grained classification method based on incremental learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111488917A (en) | 2020-08-04
Family
ID=71794445
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010198397.5A Pending CN111488917A (en) | 2020-03-19 | 2020-03-19 | Garbage image fine-grained classification method based on incremental learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111488917A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103258217A (en) * | 2013-05-15 | 2013-08-21 | 中国科学院自动化研究所 | Pedestrian detection method based on incremental learning |
CN105184322A (en) * | 2015-09-14 | 2015-12-23 | 哈尔滨工业大学 | Multi-temporal image classification method based on incremental integration learning |
CN107358257A (en) * | 2017-07-07 | 2017-11-17 | 华南理工大学 | Under a kind of big data scene can incremental learning image classification training method |
WO2019193462A1 (en) * | 2018-04-02 | 2019-10-10 | King Abdullah University Of Science And Technology | Incremental learning method through deep learning and support data |
CN109492765A (en) * | 2018-11-01 | 2019-03-19 | 浙江工业大学 | A kind of image Increment Learning Algorithm based on migration models |
CN109543838A (en) * | 2018-11-01 | 2019-03-29 | 浙江工业大学 | A kind of image Increment Learning Algorithm based on variation self-encoding encoder |
CN110162018A (en) * | 2019-05-31 | 2019-08-23 | 天津开发区精诺瀚海数据科技有限公司 | The increment type equipment fault diagnosis method that knowledge based distillation is shared with hidden layer |
Non-Patent Citations (3)
Title |
---|
CASTRO F M: "End-to-End Incremental Learning", Proceedings of the European Conference on Computer Vision (ECCV) *
WU Y: "Large Scale Incremental Learning", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition *
HUANG Weinan (黄伟楠): "Research on Incremental Learning of Convolutional Neural Networks Based on Typical Samples" (基于典型样本的卷积神经网络增量学习研究), Electronic Measurement Technology (电子测量技术) *
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111756760A (en) * | 2020-06-28 | 2020-10-09 | 深圳壹账通智能科技有限公司 | User abnormal behavior detection method based on integrated classifier and related equipment |
CN113762304B (en) * | 2020-11-26 | 2024-02-06 | 北京京东乾石科技有限公司 | Image processing method, image processing device and electronic equipment |
CN113762304A (en) * | 2020-11-26 | 2021-12-07 | 北京京东乾石科技有限公司 | Image processing method, image processing device and electronic equipment |
CN112707058A (en) * | 2020-12-10 | 2021-04-27 | 广东芯盾微电子科技有限公司 | Detection method, system, device and medium for standard actions of kitchen waste |
CN112707058B (en) * | 2020-12-10 | 2022-04-08 | 广东芯盾微电子科技有限公司 | Detection method, system, device and medium for standard actions of kitchen waste |
CN112633335A (en) * | 2020-12-10 | 2021-04-09 | 长春理工大学 | Garbage classification method and garbage can |
CN112686275A (en) * | 2021-01-04 | 2021-04-20 | 上海交通大学 | Knowledge distillation-fused generation playback frame type continuous image recognition system and method |
CN113240035A (en) * | 2021-05-27 | 2021-08-10 | 杭州海康威视数字技术股份有限公司 | Data processing method, device and equipment |
CN113591913A (en) * | 2021-06-28 | 2021-11-02 | 河海大学 | Picture classification method and device supporting incremental learning |
CN113591913B (en) * | 2021-06-28 | 2024-03-29 | 河海大学 | Picture classification method and device supporting incremental learning |
CN114529819A (en) * | 2022-02-23 | 2022-05-24 | 合肥学院 | Household garbage image recognition method based on knowledge distillation learning |
CN114529819B (en) * | 2022-02-23 | 2024-08-27 | 合肥学院 | Household garbage image recognition method based on knowledge distillation learning |
CN114937199A (en) * | 2022-07-22 | 2022-08-23 | 山东省凯麟环保设备股份有限公司 | Garbage classification method and system based on discriminant feature enhancement |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111488917A (en) | Garbage image fine-grained classification method based on incremental learning | |
CN112308158B (en) | Multi-source field self-adaptive model and method based on partial feature alignment | |
CN114937151B (en) | Lightweight target detection method based on multiple receptive fields and attention feature pyramid | |
CN111079639B (en) | Method, device, equipment and storage medium for constructing garbage image classification model | |
US20210042580A1 (en) | Model training method and apparatus for image recognition, network device, and storage medium | |
Huang et al. | Naive Bayes classification algorithm based on small sample set | |
CN109993100B (en) | Method for realizing facial expression recognition based on deep feature clustering | |
CN103366180A (en) | Cell image segmentation method based on automatic feature learning | |
CN110929848A (en) | Training and tracking method based on multi-challenge perception learning model | |
CN104809469A (en) | Indoor scene image classification method facing service robot | |
CN110751027B (en) | Pedestrian re-identification method based on deep multi-instance learning | |
CN112733936A (en) | Recyclable garbage classification method based on image recognition | |
CN110516098A (en) | Image labeling method based on convolutional neural networks and binary coding feature | |
CN112037228A (en) | Laser radar point cloud target segmentation method based on double attention | |
CN107133640A (en) | Image classification method based on topography's block description and Fei Sheer vectors | |
CN111026870A (en) | ICT system fault analysis method integrating text classification and image recognition | |
CN113010705A (en) | Label prediction method, device, equipment and storage medium | |
CN111461175A (en) | Label recommendation model construction method and device of self-attention and cooperative attention mechanism | |
CN110765285A (en) | Multimedia information content control method and system based on visual characteristics | |
Luan et al. | Sunflower seed sorting based on convolutional neural network | |
CN110008365A (en) | A kind of image processing method, device, equipment and readable storage medium storing program for executing | |
CN109344309A (en) | Extensive file and picture classification method and system are stacked based on convolutional neural networks | |
CN108898157B (en) | Classification method for radar chart representation of numerical data based on convolutional neural network | |
Sang et al. | Image recognition based on multiscale pooling deep convolution neural networks | |
Lin et al. | A 3D neuronal morphology classification approach based on convolutional neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200804 |