CN115880524B - Small sample image classification method based on Markov distance loss characteristic attention network - Google Patents
Small sample image classification method based on Markov distance loss characteristic attention network Download PDFInfo
- Publication number
- CN115880524B CN115880524B CN202211460720.7A CN202211460720A CN115880524B CN 115880524 B CN115880524 B CN 115880524B CN 202211460720 A CN202211460720 A CN 202211460720A CN 115880524 B CN115880524 B CN 115880524B
- Authority
- CN
- China
- Prior art keywords
- feature
- attention network
- class
- training set
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 42
- 238000012549 training Methods 0.000 claims abstract description 92
- 239000013598 vector Substances 0.000 claims abstract description 68
- 238000012360 testing method Methods 0.000 claims abstract description 16
- 238000000605 extraction Methods 0.000 claims abstract description 9
- 238000012935 Averaging Methods 0.000 claims abstract description 4
- 239000011159 matrix material Substances 0.000 claims description 28
- 230000008569 process Effects 0.000 claims description 8
- 238000007781 pre-processing Methods 0.000 claims description 4
- 230000007547 defect Effects 0.000 abstract description 4
- 230000006870 function Effects 0.000 description 17
- 238000004590 computer program Methods 0.000 description 7
- 238000010586 diagram Methods 0.000 description 7
- 238000012545 processing Methods 0.000 description 4
- 238000003860 storage Methods 0.000 description 3
- PXFBZOLANLWPMH-UHFFFAOYSA-N 16-Epiaffinine Natural products C1C(C2=CC=CC=C2N2)=C2C(=O)CC2C(=CC)CN(C)C1C2CO PXFBZOLANLWPMH-UHFFFAOYSA-N 0.000 description 2
- 238000013135 deep learning Methods 0.000 description 2
- 238000009826 distribution Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000009466 transformation Effects 0.000 description 2
- 230000003044 adaptive effect Effects 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 238000013136 deep learning model Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Landscapes
- Image Analysis (AREA)
Abstract
The invention relates to a small sample image classification method based on a Markov distance loss feature attention network, which comprises the steps of carrying out feature extraction on a small sample image data set with a label to obtain a training set; inputting the samples in the training set into a characteristic attention network for training to obtain a trained characteristic attention network; the training set vector obtains the class representation by averaging after passing through the trained characteristic attention network; and carrying out feature extraction on the small sample image dataset without the label to obtain a test set, inputting samples in the test set into a trained feature attention network to obtain new feature vectors, and calculating the mahalanobis distance between the new feature vectors and class representations, wherein the class corresponding to the smallest mahalanobis distance represents classification prediction of the samples. The invention overcomes the defect of insufficient knowledge of the importance of the trunk model to the new task features, and before the mahalanobis distance classification is used, different weights are given to the features through the feature attention network, so that the intra-class distance is reduced and the intra-class distance is enlarged, thereby improving the classification performance.
Description
Technical Field
The invention relates to the technical field of computer vision, in particular to a small sample image classification method based on a Markov distance loss characteristic attention network.
Background
In recent years, deep learning has made tremendous progress in numerous other tasks including image classification, object detection, and segmentation. However, advanced performance of deep learning models depends largely on the scale of the network. Training the network with a much smaller number of samples than the parameters can result in catastrophic overfitting. In many cases, however, it is costly to collect enough data to perform efficient model training. Thus, data scarcity greatly limits the applicability of current vision systems to effectively learn new vision concepts. In contrast, the human visual system can identify new classes with little tag data, and learning to summarize new classes using a small amount of tag data is of great significance to the development of deep learning, general artificial intelligence, and the like.
In small sample learning, since there are very few labeled samples of a new task, and in addition, the training domain is usually not the same as the new task domain, which makes it difficult to adapt the trunk feature distribution obtained by training to the new task, and many methods have been proposed to adapt the features extracted by the trunk to the new task. For example, the PT (Power Transform) model scales the features to the same scale, so that the large variance features cannot be dominant, and the influence caused by the difference between the training domain and the new task domain is reduced; the CNAPS (Conditional Neural Adaptive Processes) model adapts the feature vectors obtained by the backbone to the new domain in the backbone network by affine transformation. However, the above method either directly reduces the large variance feature and does not use the feature distribution information of the labeled sample in the new task, or simply affine transformation has difficulty in sufficiently affecting the extracted feature of the network. How to fully utilize the training set to adapt the backbone network to the new task is still a technical problem that needs to be solved urgently in the art.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to overcome the technical defects in the prior art, and the invention provides a small sample image classification method based on a Markov distance loss feature attention network, which overcomes the defect that a backbone model has insufficient knowledge on the importance of new task features, and the features are given different weights through the feature attention network before the Markov distance classification, so that the intra-class distance is reduced and the intra-class distance is enlarged, thereby improving the classification performance.
In order to solve the technical problems, the invention provides a small sample image classification method based on a Markov distance loss feature attention network, which comprises the following steps:
extracting features of a small sample image dataset with a label to obtain a training set, wherein the training set comprises feature vectors of samples and corresponding categories of the feature vectors;
inputting the samples in the training set to a feature attention network for training to obtain a trained feature attention network, and averaging feature vectors in the training set after passing through the trained feature attention network to obtain a class representation;
and carrying out feature extraction on the small sample image dataset without the label to obtain a test set, inputting samples in the test set into a trained feature attention network to obtain a new feature vector, and calculating the mahalanobis distance between the new feature vector and the class representation, wherein the class corresponding to the minimum mahalanobis distance represents the classification prediction of the samples.
In one embodiment of the present invention, feature extraction is performed on a labeled small sample image dataset to obtain a training set, including:
inputting the small sample image dataset with the label into a pre-trained trunk model to obtain a training set Wherein N τ represents the number of training sets, X i represents the feature vector of the ith image, and y i represents the class corresponding to the feature vector, and the trunk model is a classification network with the last classification head removed.
In one embodiment of the invention, the feature attention network comprises d convolution kernels of size 1 x 1, each convolution kernel corresponding to the weight of the corresponding feature, the output of the feature attention network being the same as its input dimension.
In one embodiment of the invention, the loss function training the feature attention network is:
Wherein intraSim denotes a similarity, and interSim denotes an inter-class similarity.
In one embodiment of the present invention, the similarity intraSim and the inter-class similarity interSim are defined as follows:
Where class representation r c represents the average of vectors obtained by inputting a c-th class of training set into the training-completed feature attention network, o c,i represents a c-th class i vector output by the feature attention network, o t,i represents a t-th class i vector output by the feature attention network, Representing the similarity of rc to o t,i obtained by the mahalanobis distance.
In one embodiment of the invention, the expression of the mahalanobis distance function maha (o t,i,rc) is as follows:
In the method, in the process of the invention, Representing a class c hybrid covariance matrix of the training set, the expression of which is represented by the class c covariance matrix of the training setCo-determined with the covariance matrix Σ τ of the training set, namely:
In the weights E is an identity matrix, covarianceAnd
In addition, the invention also provides a small sample image classification system based on the Markov distance loss feature attention network, which comprises:
the data preprocessing module is used for extracting features of the small sample image dataset with the label to obtain a training set, and the training set comprises feature vectors of the sample and corresponding categories of the feature vectors;
The feature attention network training module is used for inputting the samples in the training set into a feature attention network to train to obtain a trained feature attention network, and the feature vectors in the training set are averaged to obtain a class representation after passing through the trained feature attention network;
The classification prediction module is used for extracting features of a small sample image dataset without a label to obtain a test set, inputting samples in the test set into a trained feature attention network to obtain new feature vectors, and calculating the mahalanobis distance between the new feature vectors and class representations, wherein the class corresponding to the smallest mahalanobis distance represents classification prediction of the sample.
In one embodiment of the present invention, in the feature attention network training module, a loss function for training the feature attention network is:
Wherein intraSim denotes a similarity, and interSim denotes an inter-class similarity.
In one embodiment of the present invention, the similarity intraSim and the inter-class similarity interSim are defined as follows:
Where class representation r c represents the average of vectors obtained by inputting a c-th class of training set into the training-completed feature attention network, o c,i represents a c-th class i vector output by the feature attention network, o t,i represents a t-th class i vector output by the feature attention network, Representing the similarity of rc to o t,i obtained by the mahalanobis distance.
In one embodiment of the invention, the expression of the mahalanobis distance function maha (o t,i,rc) is as follows:
In the method, in the process of the invention, Representing a class c hybrid covariance matrix of the training set, the expression of which is represented by the class c covariance matrix of the training setCo-determined with the covariance matrix Σ τ of the training set, namely:
In the weights E is an identity matrix, covarianceAnd
Compared with the prior art, the technical scheme of the invention has the following advantages:
The small sample image classification method based on the Markov distance loss feature attention network overcomes the defect that a trunk model has insufficient knowledge on the importance of new task features, and before the Markov distance classification, features are given different weights through the feature attention network, so that the intra-class distance is reduced and the intra-class distance is enlarged, and the classification performance is improved.
Drawings
In order that the invention may be more readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof that are illustrated in the appended drawings, in which
Fig. 1 is a flow chart of a small sample image classification method based on a mahalanobis distance loss feature attention network according to the present invention.
Fig. 2 is a schematic structural diagram of a small sample image classification system based on a mahalanobis distance loss feature attention network according to the present invention.
Wherein, the reference numerals illustrate: 10. a data preprocessing module; 20. a feature attention network training module; 30. and a classification prediction module.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific examples, which are not intended to be limiting, so that those skilled in the art will better understand the invention and practice it.
Referring to fig. 1, the small sample image classification method based on the mahalanobis distance loss characteristic attention network of the invention comprises the following steps:
S1: extracting features of a small sample image dataset with a label to obtain a training set, wherein the training set comprises feature vectors of samples and corresponding categories of the feature vectors;
S2: inputting the samples in the training set to a feature attention network for training to obtain a trained feature attention network, and averaging feature vectors in the training set after passing through the trained feature attention network to obtain a class representation;
s3: and carrying out feature extraction on the small sample image dataset without the label to obtain a test set, inputting samples in the test set into a trained feature attention network to obtain a new feature vector, and calculating the mahalanobis distance between the new feature vector and the class representation, wherein the class corresponding to the minimum mahalanobis distance represents the classification prediction of the samples.
In step S1, feature extraction is performed on the labeled small sample image dataset to obtain a training set, which includes inputting the labeled small sample image dataset into a pre-trained trunk model to obtain a training setWherein N τ represents the number of training sets, X i represents the feature vector of the ith image, and y i represents the class corresponding to the feature vector, and the trunk model is a classification network with the last classification head removed.
Wherein in step S2 the feature attention network comprises d convolution kernels of size 1 x 1, each convolution kernel corresponding to the weight of the respective feature, the output of the feature attention network being identical to its input dimension. Training the feature attention network for a loss function of:
Wherein intraSim denotes a similarity, and interSim denotes an inter-class similarity.
The definition of the similarity intraSim and the similarity interSim between classes is as follows:
Where class representation r c represents the average of vectors obtained by inputting a c-th class of training set into the training-completed feature attention network, o c,i represents a c-th class i vector output by the feature attention network, o t,i represents a t-th class i vector output by the feature attention network, The similarity of r c to o t,i obtained by the mahalanobis distance is shown. Wherein the expression of the mahalanobis distance function maha (o t,i,rc) is as follows:
In the method, in the process of the invention, Representing a class c hybrid covariance matrix of the training set, the expression of which is represented by the class c covariance matrix of the training setCo-determined with the covariance matrix Σ τ of the training set, namely:
In the weights E is an identity matrix, covarianceAnd
The small sample image classification method based on the mahalanobis distance loss characteristic attention network provided by the embodiment of the invention is described in detail below through a specific application scene.
The invention proceeds on a small sample image classified public dataset CIFAR-FS. The CIFAR-FS dataset contains 100 types of images in total, each type consists of 600 images, has higher average inter-class similarity and lower resolution of 32×32, and has sufficient challenges for a small sample classification method. In the example, the first 60 classes of data of CIFAR-FS dataset are used as the pre-trained backbone model, here WIDERESNET is used as the backbone model. According to the invention, the CIFAR-FS data set is used as new task learning, and a 5-way5-shot mode is adopted for test. The specific implementation steps are as follows:
S1: firstly, inputting training images of class 40 after CIFAR-FS data set into a pre-trained trunk model to obtain a training set Where N τ is the number of training sets, i.e., N τ=|Dτ|=5×5=25,Xi is a feature vector of the ith image, and let d=600 and y i be the class corresponding to the feature vector.
S2: the feature attention network includes 600 convolution kernels of size 1 x 1, with each convolution kernel matching the weight of the corresponding feature. The output of the feature attention network is the same as its input dimension. To train the feature attention network, the loss function is defined as follows:
wherein intraSim represents similarity, and interSim represents similarity between classes. The above-described loss function is expected to obtain a larger similarity between classes and a smaller similarity between classes by learning. intraSim and interSim are defined as follows:
Where class representation r c represents the average of vectors obtained by inputting a c-th class of training set into the training-completed feature attention network, o c,i represents a c-th class i vector output by the feature attention network, o t,i represents a t-th class i vector output by the feature attention network, The expression for the mahalanobis distance function maha (o t,i,rc) is as follows, representing the similarity of r c to o t,i by the mahalanobis distance:
Wherein the method comprises the steps of Representing a class c hybrid covariance matrix of the training set, the expression of which is represented by the class c covariance matrix of the training setIs determined jointly with the covariance matrix Σ τ of the training set, i.e.
Wherein the weights areE is an identity matrix, covarianceAnd
The training times of the feature attention network reach 15 rounds and then stop, and a new feature vector set can be obtainedAnd classes of each class represent r c, c=1, l,5.
S3: inputting a certain test image of CIFAR-FS post 40 classes into a pre-trained trunk model so as to extract a characteristic vector X thereof; and inputting the feature vector into a trained feature attention network to obtain a new feature vector o, and solving the mahalanobis distance between the new feature vector o and the class representation r c, wherein the class corresponding to the smallest distance is the prediction of the model on the sample.
The effect of the invention can be verified by the following experiment:
The present method compares to PT, CNAPS and CNAPS on the same dataset using the mahalanobis distance metric SIMPLE CNAPS. From the results in table 1, it can be seen that the method according to the present invention obtains better performance in classification Accuracy (ACC) than PT.
TABLE 1
CNAPS | Simple CNAPS | PT | The invention is that | |
ACC | 61.57 | 74.36 | 90.68 | 96.59 |
The following describes a small sample image classification system based on a mahalanobis distance loss feature attention network according to an embodiment of the present invention, and the small sample image classification system based on a mahalanobis distance loss feature attention network described below and the small sample image classification method based on a mahalanobis distance loss feature attention network described above may be referred to correspondingly.
Referring to fig. 2, the embodiment of the invention further provides a small sample image classification system based on a mahalanobis distance loss feature attention network, which comprises:
the data preprocessing module 10 is used for extracting features of a small sample image dataset with a label to obtain a training set, wherein the training set comprises feature vectors of samples and corresponding categories of the feature vectors;
The feature attention network training module 20 is configured to input the sample in the training set to a feature attention network for training, obtain a trained feature attention network, and average the feature vector in the training set after passing through the trained feature attention network to obtain a class representation;
the classification prediction module 30 is configured to perform feature extraction on a small sample image dataset without a label, obtain a test set, input samples in the test set to a trained feature attention network, obtain a new feature vector, calculate a mahalanobis distance between the new feature vector and a class representation, where the class corresponding to the minimum mahalanobis distance represents classification prediction on the sample.
In one embodiment of the present invention, in the feature attention network training module 20, the loss function of training the feature attention network is:
Wherein intraSim denotes a similarity, and interSim denotes an inter-class similarity.
In one embodiment of the present invention, the similarity intraSim and the inter-class similarity interSim are defined as follows:
Where class representation r c represents the average of vectors obtained by inputting a c-th class of training set into the training-completed feature attention network, o c,i represents a c-th class i vector output by the feature attention network, o t,i represents a t-th class i vector output by the feature attention network, The similarity of r c to o t,i obtained by the mahalanobis distance is shown.
In one embodiment of the invention, the expression of the mahalanobis distance function maha (o t,i,rc) is as follows:
In the method, in the process of the invention, Representing a class c hybrid covariance matrix of the training set, the expression of which is represented by the class c covariance matrix of the training setCo-determined with the covariance matrix Σ τ of the training set, namely:
In the weights E is an identity matrix, covarianceAnd
The small sample image classification system based on the mahalanobis distance loss feature attention network of the present embodiment is used to implement the small sample image classification method based on the mahalanobis distance loss feature attention network, so that the detailed description of the system can be seen in the example section of the small sample image classification method based on the mahalanobis distance loss feature attention network in the foregoing, so that the detailed description of the small sample image classification method based on the mahalanobis distance loss feature attention network can be referred to the description of the examples of the corresponding sections, and will not be further described herein.
In addition, since the small sample image classification system based on the mahalanobis distance loss feature attention network of the present embodiment is used to implement the small sample image classification method based on the mahalanobis distance loss feature attention network, the function of the small sample image classification system corresponds to the function of the method described above, and the description thereof is omitted herein.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It is apparent that the above examples are given by way of illustration only and are not limiting of the embodiments. Other variations and modifications of the present invention will be apparent to those of ordinary skill in the art in light of the foregoing description. It is not necessary here nor is it exhaustive of all embodiments. And obvious variations or modifications thereof are contemplated as falling within the scope of the present invention.
Claims (4)
1. A small sample image classification method based on a Markov distance loss feature attention network is characterized by comprising the following steps of: the method comprises the following steps:
extracting features of a small sample image dataset with a label to obtain a training set, wherein the training set comprises feature vectors of samples and corresponding categories of the feature vectors;
inputting the samples in the training set to a feature attention network for training to obtain a trained feature attention network, and averaging feature vectors in the training set after passing through the trained feature attention network to obtain a class representation;
Extracting features of a small sample image dataset without a label to obtain a test set, inputting samples in the test set into a trained feature attention network to obtain a new feature vector, and calculating a mahalanobis distance between the new feature vector and a class representation, wherein the class corresponding to the minimum mahalanobis distance represents classification prediction of the samples;
wherein training the feature attention network has a loss function of:
wherein intraSim represents a similarity, and interSim represents an inter-class similarity;
The definition of the similarity intraSim and the similarity interSim between classes is as follows:
Where class representation r c represents the average of vectors obtained by inputting a c-th class of training set into the training-completed feature attention network, o c,i represents a c-th class i vector output by the feature attention network, o t,i represents a t-th class i vector output by the feature attention network, Representing the similarity of r c and o t,i obtained by the mahalanobis distance;
The expression of the mahalanobis distance function maha (o t,i,rc) is as follows:
In the method, in the process of the invention, Representing a class c hybrid covariance matrix of the training set, the expression of which is represented by the class c covariance matrix of the training setCo-determined with the covariance matrix Σ τ of the training set, namely:
In the weights E is an identity matrix, covarianceAnd
2. The small sample image classification method based on mahalanobis distance loss feature attention network of claim 1, wherein: feature extraction is performed on a small sample image dataset with a label to obtain a training set, and the feature extraction comprises the following steps:
inputting the small sample image dataset with the label into a pre-trained trunk model to obtain a training set Wherein N τ represents the number of training sets, X i represents the feature vector of the ith image, and y i represents the class corresponding to the feature vector, and the trunk model is a classification network with the last classification head removed.
3. The small sample image classification method based on mahalanobis distance loss feature attention network of claim 1, wherein: the feature attention network comprises d convolution kernels of size 1 x1, each convolution kernel matching the weight of the corresponding feature, the output of the feature attention network being the same as its input dimension.
4. A small sample image classification system based on mahalanobis distance loss feature attention network, characterized in that: comprising the following steps:
the data preprocessing module is used for extracting features of the small sample image dataset with the label to obtain a training set, and the training set comprises feature vectors of the sample and corresponding categories of the feature vectors;
The feature attention network training module is used for inputting the samples in the training set into a feature attention network to train to obtain a trained feature attention network, and the feature vectors in the training set are averaged to obtain a class representation after passing through the trained feature attention network;
The classification prediction module is used for extracting features of a small sample image dataset without a label to obtain a test set, inputting samples in the test set into a trained feature attention network to obtain new feature vectors, and calculating the mahalanobis distance between the new feature vectors and class representations, wherein the class corresponding to the smallest mahalanobis distance represents classification prediction of the samples;
wherein in the feature attention network training module, the loss function of training the feature attention network is:
wherein intraSim represents a similarity, and interSim represents an inter-class similarity;
The definition of the similarity intraSim and the similarity interSim between classes is as follows:
Where class representation r c represents the average of vectors obtained by inputting a c-th class of training set into the training-completed feature attention network, o c,i represents a c-th class i vector output by the feature attention network, o t,i represents a t-th class i vector output by the feature attention network, Representing the similarity of r c and o t,i obtained by the mahalanobis distance;
The expression of the mahalanobis distance function maha (o t,i,rc) is as follows:
In the method, in the process of the invention, Representing a class c hybrid covariance matrix of the training set, the expression of which is represented by the class c covariance matrix of the training setCo-determined with the covariance matrix Σ τ of the training set, namely:
In the weights E is an identity matrix, covarianceAnd
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211460720.7A CN115880524B (en) | 2022-11-17 | 2022-11-17 | Small sample image classification method based on Markov distance loss characteristic attention network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211460720.7A CN115880524B (en) | 2022-11-17 | 2022-11-17 | Small sample image classification method based on Markov distance loss characteristic attention network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115880524A CN115880524A (en) | 2023-03-31 |
CN115880524B true CN115880524B (en) | 2024-09-06 |
Family
ID=85760456
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211460720.7A Active CN115880524B (en) | 2022-11-17 | 2022-11-17 | Small sample image classification method based on Markov distance loss characteristic attention network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115880524B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112836629A (en) * | 2021-02-01 | 2021-05-25 | 清华大学深圳国际研究生院 | Image classification method |
CN114444600A (en) * | 2022-01-28 | 2022-05-06 | 南通大学 | Small sample image classification method based on memory enhanced prototype network |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10796203B2 (en) * | 2018-11-30 | 2020-10-06 | International Business Machines Corporation | Out-of-sample generating few-shot classification networks |
CN109961089B (en) * | 2019-02-26 | 2023-04-07 | 中山大学 | Small sample and zero sample image classification method based on metric learning and meta learning |
US11948347B2 (en) * | 2020-04-10 | 2024-04-02 | Samsung Display Co., Ltd. | Fusion model training using distance metrics |
US20220067582A1 (en) * | 2020-08-27 | 2022-03-03 | Samsung Electronics Co. Ltd. | Method and apparatus for continual few-shot learning without forgetting |
US20220300823A1 (en) * | 2021-03-17 | 2022-09-22 | Hanwen LIANG | Methods and systems for cross-domain few-shot classification |
CN114821722A (en) * | 2022-04-27 | 2022-07-29 | 南京邮电大学 | Improved face recognition system and method based on Mahalanobis distance |
CN115019083A (en) * | 2022-05-11 | 2022-09-06 | 长春理工大学 | Word embedding graph neural network fine-grained graph classification method based on few-sample learning |
-
2022
- 2022-11-17 CN CN202211460720.7A patent/CN115880524B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112836629A (en) * | 2021-02-01 | 2021-05-25 | 清华大学深圳国际研究生院 | Image classification method |
CN114444600A (en) * | 2022-01-28 | 2022-05-06 | 南通大学 | Small sample image classification method based on memory enhanced prototype network |
Also Published As
Publication number | Publication date |
---|---|
CN115880524A (en) | 2023-03-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108288051B (en) | Pedestrian re-recognition model training method and device, electronic equipment and storage medium | |
CN109993201B (en) | Image processing method, device and readable storage medium | |
CN110135505B (en) | Image classification method and device, computer equipment and computer readable storage medium | |
JP6897749B2 (en) | Learning methods, learning systems, and learning programs | |
CN104966105A (en) | Robust machine error retrieving method and system | |
CN111985333B (en) | Behavior detection method based on graph structure information interaction enhancement and electronic device | |
WO2024060684A1 (en) | Model training method, image processing method, device, and storage medium | |
CN113223037B (en) | Unsupervised semantic segmentation method and unsupervised semantic segmentation system for large-scale data | |
CN114913923A (en) | Cell type identification method aiming at open sequencing data of single cell chromatin | |
CN115424288A (en) | Visual Transformer self-supervision learning method and system based on multi-dimensional relation modeling | |
CN115731422A (en) | Training method, classification method and device of multi-label classification model | |
CN117197904A (en) | Training method of human face living body detection model, human face living body detection method and human face living body detection device | |
CN111694954B (en) | Image classification method and device and electronic equipment | |
CN109409231B (en) | Multi-feature fusion sign language recognition method based on self-adaptive hidden Markov | |
CN115880524B (en) | Small sample image classification method based on Markov distance loss characteristic attention network | |
CN107563287B (en) | Face recognition method and device | |
CN116704208A (en) | Local interpretable method based on characteristic relation | |
CN104778479B (en) | A kind of image classification method and system based on sparse coding extraction | |
CN113177602B (en) | Image classification method, device, electronic equipment and storage medium | |
CN116503896A (en) | Fish image classification method, device and equipment | |
CN116188445A (en) | Product surface defect detection and positioning method and device and terminal equipment | |
CN107341485B (en) | Face recognition method and device | |
Pramadi et al. | Flowers identification using first-order feature extraction and multi-SVM Classifier | |
CN116912921B (en) | Expression recognition method and device, electronic equipment and readable storage medium | |
Xue et al. | Incremental zero-shot learning based on attributes for image classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |