CN109376786A - Image classification method, device, terminal device and readable storage medium - Google Patents
Image classification method, device, terminal device and readable storage medium
- Publication number: CN109376786A
- Application number: CN201811284267.2A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Neural networks; architecture; combinations of networks
- G06N3/08 — Neural networks; learning methods
Abstract
The present invention is applicable to the technical field of image processing and provides an image classification method, device, terminal device, and readable storage medium. The method comprises: training a deep convolutional neural network with known-class images to obtain a network training model; establishing a probability distribution model for each class of samples in the known-class images according to the network training model; correcting the activation values of the known-class images according to the probability distribution models; obtaining the activation value of an unknown-class image according to the activation values of the known-class image data; and classifying images according to the activation values of the known-class images and the activation value of the unknown-class image. The invention enables reasonable and accurate classification, in practical applications, of images outside the training-set classes of the known-class images.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image classification method, an image classification device, a terminal device, and a readable storage medium.
Background
Image classification uses a computer to extract, process, and analyze the feature data of an image, identify different targets and objects, and assign the image to a class according to its characteristics.
At present, image classification based on deep neural network algorithms uses a trained image classification model to classify known image data, where the trained model is generated from training image data and test image data drawn from the same class space; alternatively, the class of an unknown image is judged from the activation values of correctly classified image samples of a given class. However, images of unknown classes outside the training set of known image classes cannot be classified reasonably, so image classification is inaccurate.
Disclosure of Invention
In view of the above, embodiments of the present invention provide an image classification method, an image classification device, a terminal device, and a readable storage medium, so as to solve the problems in the prior art that images of unknown classes outside the training set of known image classes cannot be reasonably classified and that image classification is inaccurate.
A first aspect of an embodiment of the present invention provides an image classification method, including:
training the deep convolution neural network through the known class image to obtain a network training model;
respectively establishing a probability distribution model for each type of sample in the known type of image according to the network training model;
correcting the activation value of the known class image according to the probability distribution model;
acquiring an activation value of an unknown class image according to the activation value of the known class image data;
and classifying the images according to the activation values of the known class images and the activation values of the unknown class images.
In one embodiment, training a deep convolutional neural network through images of known classes to obtain a network training model, including:
dividing the acquired known class images into a training set and a test set;
training the deep convolutional neural network through the images of the training set, testing the classification performance of the deep convolutional neural network through the images of the test set, and outputting a network classification result;
performing supervision operation on the network classification result through a loss function to obtain a supervision operation result;
and adjusting the network parameters of the deep convolutional neural network according to the supervision operation result.
In one embodiment, the establishing of the probability distribution model for each class of samples in the known class images according to the network training model comprises:
obtaining a mean vector of each type of sample in the known type of image;
calculating the distance between each type of sample in the known type of image and the mean vector;
selecting a plurality of input samples from each type of samples according to the distance and a preset proportion;
and estimating model parameters of the probability distribution model corresponding to the input sample category according to the input samples.
In one embodiment, modifying the activation values of the images of the known class according to the probability distribution model comprises:
extracting a first activation value of a test sample in the known class image through the network training model;
selecting a preset number of sample categories from the test samples according to the first activation value;
calculating the probability of the test samples in the sample classes of the preset number according to the probability distribution model corresponding to the sample classes of the preset number and the first activation value of the test samples in the sample classes;
and correcting the first activation values of the test samples in the preset number of sample categories according to the probability to obtain second activation values.
In one embodiment, obtaining the activation value of the unknown class image from the activation value of the known class image data comprises:
calculating the activation value $a_{test}^{C+1}$ of the unknown-class image according to the first activation values and the second activation values of the test samples in the selected preset number of sample classes; the calculation formula is:

$$a_{test}^{C+1} = \sum_{c=1}^{C} \left( a_{test}^{c} - \hat{a}_{test}^{c} \right)$$

wherein $a_{test}^{c}$ is the first activation value of the test sample, $\hat{a}_{test}^{c}$ is the second activation value of the test sample, and $c$ is any one of the $C$ total classes.
In one embodiment, classifying the image according to the activation value of the known class image and the activation value of the unknown class image comprises:
normalizing the activation value of the image of the known type and the activation value of the image of the unknown type to obtain a new activation value of the image;
selecting a pending class value corresponding to the current test image with the maximum activation value from the new activation values;
judging whether the undetermined class value corresponds to the unknown class value;
if so, refusing to identify the current test image, and judging that the current test image is in an undefined class;
if not, judging whether the activation value corresponding to the current test image is smaller than a preset threshold value;
if so, refusing to identify the current test image, and judging that the current test image is in an undefined class;
if not, judging that the current test image belongs to a known class, and classifying the current test image according to the known class.
A second aspect of an embodiment of the present invention provides an image classification apparatus, including:
the first model acquisition unit is used for training the deep convolution neural network through the known class image to acquire a network training model;
the second model obtaining unit is used for respectively establishing a probability distribution model for each type of sample in the known type of image according to the network training model;
the correcting unit is used for correcting the activation value of the known class image according to the probability distribution model;
the activation value acquisition unit is used for acquiring the activation value of the unknown class image according to the activation value of the known class image data;
and the image classification judging unit is used for classifying the images according to the activation values of the known class images and the activation values of the unknown class images.
In one embodiment, the first model obtaining unit includes:
the data dividing module is used for dividing the acquired known class images into a training set and a test set;
the first result generation module is used for training the deep convolutional neural network through the image data of the training set, testing the classification performance of the trained deep convolutional neural network through the image data of the test set, and outputting a network classification result;
the second result generation module is used for carrying out supervision operation on the network classification result through a loss function to obtain a supervision operation result;
and the parameter adjusting module is used for adjusting the network parameters of the deep convolutional neural network according to the supervision operation result.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the image classification method when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above-described image classification method.
Compared with the prior art, the embodiment of the invention has the following beneficial effects: according to the embodiment of the invention, the deep convolution neural network is trained through the known class image to obtain a network training model; respectively establishing a probability distribution model for each type of sample in the known type of image according to the network training model; correcting the activation value of the known class image according to the probability distribution model; acquiring an activation value of an unknown class image according to the activation value of the known class image data; classifying the images according to the activation values of the known class images and the activation values of the unknown class images; by training the deep convolutional neural network, the recognition capability of the image is optimized, and the performance of the network training model on image classification is improved; by establishing a probability distribution model for each type of sample and correcting the image activation value, the classification of the images can be better described, the accuracy and the rationality of image classification are improved, and the problem that the images of unknown types are wrongly classified is avoided; reasonable classification of images except for the training set category in the known category images in practical application is realized; has strong usability and practicability.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart illustrating an implementation of an image classification method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of supervision of the network by the loss functions according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of an implementation process for obtaining a network training model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an implementation process for building a probability distribution model according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of an implementation of correcting activation values according to an embodiment of the present invention;
fig. 6 is a schematic flow chart of an implementation of classifying an image according to an embodiment of the present invention;
fig. 7 is a schematic diagram of an image classification apparatus according to a second embodiment of the present invention;
fig. 8 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The terms "comprises" and "comprising," and any variations thereof, in the description and claims of this invention and the above-described drawings are intended to cover non-exclusive inclusions. For example, a process, method, or system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used to distinguish between different objects and are not used to describe a particular order.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Referring to fig. 1, which is a schematic diagram illustrating the implementation flow of an image classification method provided in an embodiment of the present invention, the method is intended to solve the problems that current image classification methods cannot be directly extended to identify image classes that did not appear during network model training, and that they lack transferability and flexibility. By the method, image classes that have already been discovered and recorded can be reasonably and correctly identified and classified, images of unknown classes can be screened out, and the practical requirements of open-set image recognition application scenarios are met. As shown, the method comprises the following steps:
and S101, training the deep convolutional neural network through the known class image to obtain a network training model.
In the embodiment of the invention, the known-class images are images of selected, fixed classes, and the deep convolutional neural network is trained under the closed-set condition; relative to images under the open-set condition these classes are known, whereas images under the open-set condition also include images of unknown classes. The known-class images comprise training images and test images for the deep convolutional neural network, and the class space of the training images is exactly the same as that of the test images; for example, the known-class images may include 500 images covering 5 classes, each class comprising 100 images, of which 80 are training images and 20 are test images.
In addition, a corresponding classification base network is selected according to the number of classes in the known-class image dataset and the complexity of the dataset; for example, the selected base network may include, but is not limited to, the residual network ResNet50.
It should be noted that, in the process of training the deep convolutional neural network with the known-class images, a network supervision part is also arranged for the output of the network training model. The network supervision part computes the loss value of the deep convolutional neural network through the loss function; gradient back-propagation of the network training model is realized through back propagation, the deep convolutional neural network is optimized and its parameters updated, and the image recognition capability of the deep convolutional neural network in the training stage is improved. The loss function may be a superposition of several loss functions; for example, a cross-entropy loss function and a center loss function are used to jointly supervise the network classification result. Fig. 2 shows a schematic diagram of supervision of the network by the loss functions: after an image is input into the deep convolutional neural network, a result is output through the network training model, the cross-entropy loss function and the center loss function supervise the network training model, and the back-propagated gradient of the supervision result drives the update of the network training model's parameters and improves its image recognition capability.
As a specific implementation example of step S101, as shown in the implementation flow diagram of obtaining a network training model shown in fig. 3, training a deep convolutional neural network through a known class image to obtain the network training model includes:
step S1011, dividing the acquired known class images into a training set and a test set.
Step S1012, training the deep convolutional neural network through the images of the training set, testing the classification performance of the deep convolutional neural network through the images of the test set, and outputting a network classification result.
In this embodiment, a dataset of known-class images is collected and sorted, and the dataset is treated as a closed class space and divided into a training set and a test set; for example, the dataset of known-class images includes 500 images covering 5 classes, each class comprising 100 images, of which 80 belong to the training set and 20 to the test set; the test set and the training set have the same class space, each including images of all 5 classes. The deep convolutional neural network is trained with the images of the training set, its classification performance is checked with the images of the test set, and the network classification result is output.
And S1013, performing supervision operation on the network classification result through a loss function to obtain a supervision operation result.
In this embodiment, the set loss function may be a superposition of multiple loss functions, including a cross-entropy loss function (Cross-Entropy Loss), whose expression is:

$$L_{cross\_entropy} = -\sum_{i=1}^{C} y_i \log P(y_i \mid x) \quad (1)$$

where $x$ is the image input to the network training model, $C$ is the total number of image classes in the dataset, $y_i$ indicates whether the input image belongs to the $i$-th class (1 for belonging, 0 for not), and $P(y_i \mid x)$ is the probability that the input image belongs to the $i$-th class.
The set loss functions further include a center loss function (Center Loss), which also supervises the network classification result and is expressed as:

$$L_{center} = 0.5\,\lVert x - x_c \rVert^2 \quad (2)$$

where $x$ is the feature extracted from the input image by the deep convolutional neural network and $x_c$ is the center feature vector of the class to which the input image belongs.
During training of the deep convolutional neural network, the two loss functions are combined to jointly supervise the network, giving the overall loss function:

$$L_{total} = \lambda_{cross\_entropy} L_{cross\_entropy} + \lambda_{center} L_{center} \quad (3)$$

where $\lambda_{cross\_entropy}$ and $\lambda_{center}$ are the weights of the two loss functions; typically $\lambda_{cross\_entropy}$ is set to 1 and $\lambda_{center}$ to $8 \times 10^{-7}$.
The loss function value of the deep convolutional neural network is calculated through the set loss function and back-propagated to realize gradient return of the deep convolutional neural network, thereby generating the supervision operation result.
And step S1014, adjusting the network parameters of the deep convolutional neural network according to the supervision operation result.
In this embodiment, the network parameters are hyper-parameters set before the deep convolutional neural network learning, including but not limited to the learning rate and batch size of the deep convolutional neural network, and by selecting appropriate hyper-parameters, training is performed to obtain parameter data, and through loss function supervision operation, the set hyper-parameters of the deep convolutional neural network are continuously updated and optimized, and more optimal hyper-parameters are selected, so that the image recognition and classification performance of the network is improved.
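As a concrete illustration of this joint supervision, the following is a minimal sketch assuming PyTorch and the ResNet50 base network mentioned above. The patent publishes no code, so the `CenterLoss` module, the SGD optimizer, the learning rate, and the dummy batch are assumptions; for brevity the center loss is applied to the logits rather than to intermediate features.

```python
# Minimal sketch of the joint cross-entropy + center loss supervision (formula (3)).
# Assumptions: PyTorch, ResNet50 backbone, SGD; dummy tensors stand in for real data.
import torch
import torch.nn as nn
from torchvision import models

class CenterLoss(nn.Module):
    """Keeps one learnable center per class; formula (2): 0.5 * ||x - x_c||^2."""
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        return 0.5 * ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

num_classes = 5                                    # the 5 known classes of the example
backbone = models.resnet50(num_classes=num_classes)
center_loss = CenterLoss(num_classes, feat_dim=num_classes)  # applied to logits here
ce_loss = nn.CrossEntropyLoss()
lam_ce, lam_center = 1.0, 8e-7                     # weights from formula (3)
optimizer = torch.optim.SGD(
    list(backbone.parameters()) + list(center_loss.parameters()), lr=0.01)

images = torch.randn(8, 3, 224, 224)               # dummy batch of training images
labels = torch.randint(0, num_classes, (8,))
logits = backbone(images)                          # pre-softmax activation values
loss = lam_ce * ce_loss(logits, labels) + lam_center * center_loss(logits, labels)
optimizer.zero_grad()
loss.backward()                                    # back-propagation: the supervision operation
optimizer.step()                                   # adjust network parameters (step S1014)
```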
Through the embodiment, a plurality of loss functions are utilized to monitor the network classification result together, a cross entropy function is introduced, and meanwhile, a central loss function is also set, so that the intra-class distance of each class of samples in the image is reduced in the training process, the inter-class distance between different classes of samples is increased, and the classification result is more distinctive; meanwhile, the center loss function is also beneficial to the calculation of the center vector of each type of samples in the follow-up process; when the proportion of the two loss functions is synthesized, the two loss functions are combined in a weighting mode, and the accuracy of image classification of the network training model is improved.
And S102, respectively establishing a probability distribution model for each type of sample in the known type of image according to the network training model.
In this embodiment, the samples of each class in the known-class images follow a certain probability distribution. Using the trained network training model, a corresponding probability distribution model is established for each class of samples. Before the probability distribution model can be determined, the model parameters for that sample class must be estimated from existing samples, so as to determine the probability distribution of the samples within each class and to provide a probabilistic basis for judging whether an image belongs to a known class.
Through the probability distribution model, the sample distribution of each class of the image is analyzed, the image classification can be better described, whether the image belongs to a certain class or not is judged according to the probability, and the accuracy of the image classification of the network training model is improved.
As a specific implementation example of step S102, as shown in the implementation flow diagram of establishing a probability distribution model shown in fig. 4, respectively establishing a probability distribution model for each type of sample in the known type of image according to the network training model, includes:
step S1021, obtaining a mean vector of each type of sample in the known type of image.
In this embodiment, correctly classified samples from the training set of the known-class images are selected, and the activation value of each input image is computed through the network training model. The activation values of each class of samples are averaged to obtain the mean vector of that class:

$$u_c = \frac{1}{N_c} \sum_{n=1}^{N_c} a_n^{c} \quad (4)$$

where $u_c$ is the mean vector of class-$c$ samples, $N_c$ is the number of correctly classified class-$c$ samples, and $a_n^{c}$ is the activation value of the $n$-th sample in class $c$.

The activation values of each class of correctly classified samples are extracted through the network training model, and the mean vector corresponding to each class is calculated.
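A sketch of the mean-vector computation of formula (4) follows; it assumes the activation values have already been extracted into a NumPy array, which the patent does not prescribe.

```python
import numpy as np

def class_mean_vectors(acts: np.ndarray, labels: np.ndarray,
                       preds: np.ndarray, num_classes: int) -> dict:
    """Formula (4): mean activation vector u_c over the correctly classified
    training samples of each class c. `acts` is (N, C) pre-softmax activations."""
    correct = preds == labels
    return {c: acts[correct & (labels == c)].mean(axis=0)
            for c in range(num_classes)}
```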
Step S1022, calculate the distance between each class of samples in the known class image and the mean vector.
In this embodiment, for correctly classified samples in the training set of known-class images, the distance between each sample and the mean vector of its class is calculated. The distance may include, without limitation, the Euclidean distance and the cosine distance. Taking the combination of the two as an example, the Euclidean distance $d_{euclidean}$ and the cosine distance $d_{cosine}$ are weighted and combined into the final distance:

$$d_{total} = \lambda_{euclidean}\, d_{euclidean} + \lambda_{cosine}\, d_{cosine} \quad (5)$$

where $\lambda_{euclidean}$ and $\lambda_{cosine}$ are the weighting coefficients of the two distances; $\lambda_{euclidean}$ is set to 1/200 and $\lambda_{cosine}$ to 1.
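The weighted distance of formula (5) admits a direct sketch; the default weights below are the 1/200 and 1 stated above.

```python
import numpy as np

def combined_distance(a: np.ndarray, u_c: np.ndarray,
                      lam_euc: float = 1 / 200, lam_cos: float = 1.0) -> float:
    """Formula (5): weighted sum of the Euclidean and cosine distances between
    an activation vector `a` and the class mean vector `u_c`."""
    d_euc = np.linalg.norm(a - u_c)
    d_cos = 1.0 - np.dot(a, u_c) / (np.linalg.norm(a) * np.linalg.norm(u_c))
    return float(lam_euc * d_euc + lam_cos * d_cos)
```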
And S1023, selecting a plurality of input samples from each type of samples according to the distance and a preset proportion.
In this embodiment, following extreme value estimation theory, the final distances calculated for each class of samples are sorted, and a number of samples is taken according to a preset proportion; the selected samples are those with the largest distances, i.e., the top-ranked samples by final distance value.
The number of samples selected according to the preset proportion is:

$$N_{df} = \lambda N_c \quad (6)$$

where $N_c$ is the number of correctly classified class-$c$ samples, $\lambda$ is the proportion of samples selected from class $c$ (typically 20%–40%), and $N_{df}$ is the number of class-$c$ samples selected as input samples.
And taking a plurality of selected samples of a certain class as input samples of the probability distribution model, and estimating model parameters in the probability distribution model corresponding to the samples of the class according to the input samples.
And step S1024, estimating model parameters of the probability distribution model corresponding to the input sample type according to the input samples.
In this embodiment, the probability distribution model may be a Weibull distribution model, whose expression is:

$$w_n = 1 - \gamma\,\frac{\alpha - n}{\alpha}\left[1 - \exp\!\left(-\left(\frac{d_n - \tau_n}{\lambda_n}\right)^{\kappa_n}\right)\right] \quad (7)$$

where $w_n$ is the belonging probability estimated by the Weibull distribution model for an input sample, $a_{test}$ is the activation value extracted from the input sample and $d_n$ its distance to the mean vector of the $n$-th selected class, $\alpha$ is the number of selected sample classes with the largest activation values, $n \in [1, \alpha]$, $\gamma$ is the control coefficient of the Weibull probability, and $\tau_n$, $\kappa_n$, $\lambda_n$ are the parameters of the $n$-th Weibull distribution model.

The selected input samples are used to estimate, by controlling the proportion of samples in the tail of the Weibull distribution, the parameters of the Weibull distribution model corresponding to the sample class; each group of parameters determines the Weibull distribution model of one class.
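A sketch of the tail selection (formula (6)) and the Weibull parameter estimation follows. SciPy's `weibull_min.fit` is used here as a stand-in for whatever fitting routine the patent contemplates (OpenMax-style implementations typically use libMR's FitHigh), so the choice of fitter is an assumption.

```python
import numpy as np
from scipy.stats import weibull_min

def fit_class_weibull(distances: np.ndarray, tail_fraction: float = 0.3):
    """Formula (6): keep the lambda * N_c largest distances of one class
    (lambda roughly 20-40%) and fit a Weibull model to that tail.
    Returns (shape, loc, scale), read later as (kappa_n, tau_n, lambda_n)."""
    n_tail = max(1, int(tail_fraction * len(distances)))
    tail = np.sort(distances)[-n_tail:]            # the largest distances only
    return weibull_min.fit(tail)                   # maximum-likelihood estimate
```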
According to this embodiment, when determining the probability distribution model of each class of samples, the mean vector (i.e., the center vector) of each class is first calculated, the distance between each sample of the class and its mean vector is calculated, and samples are selected at a certain proportion as input samples for estimating the model parameters of the probability distribution model. The distance calculation combines the Euclidean distance and the cosine distance, and the corresponding probability model is estimated from the distances between each class of samples and the corresponding center vector, from the perspective of the sample probability distribution, improving the estimation accuracy of the probability distribution model. The probability distribution model can then provide the probability that an input test sample belongs to a given class, improving image classification performance and accuracy and making the method more practical.
And S103, correcting the activation value of the known class image according to the probability distribution model.
In this embodiment, according to the training set of the known class images and the images of the test set, the activation value corresponding to the test image can be extracted by inputting the test image into the network training model, and the extracted activation value is the output activation value before the normalization function processing.
And calculating the probability value of the test image of a certain category belonging to the category by using the probability distribution model, and multiplying the probability value by the activation value of the test image to obtain the corrected activation value.
Specifically, as shown in the schematic implementation flow diagram of modifying the activation value shown in fig. 5, modifying the activation value of the known class image according to the probability distribution model includes:
and step S1031, extracting a first activation value of the test sample in the known class image through the network training model.
In this embodiment, test samples of different classes from the known-class images are input into the network training model, and the first activation values $a_{test}$ of the test samples of different classes are extracted; the first activation value is the activation value of the test sample before the normalization function is applied.
It should be noted that, in the present embodiment, the first activation value does not represent only one activation value, but a plurality of activation values obtained for different classes of test samples without being processed by the normalization function.
Step S1032, according to the first activation value, a preset number of sample categories are selected from the test samples.
In this embodiment, according to the extreme value estimation theory, for a plurality of sample categories, the acquired first activation values are sorted according to size, and a preset number of sample categories with larger activation values arranged in front are selected, for example, the number of the selected sample categories is α.
Step S1033, calculating the probability of the test sample in the preset number of sample categories according to the probability distribution model corresponding to the preset number of sample categories and the first activation value of the test sample in the sample category.
In this embodiment, according to the expression (7) of the probability distribution model determined in step S1024 and the activation value of the test sample with the higher activation value selected from the sample categories, the belonging probability of the selected test sample corresponding to a certain category is calculated.
It should be noted that the probability control coefficient γ is added to the probability distribution model to reduce the probability value, avoiding the situation where the activation value of one specific class of test sample dominates and causes some samples to be wrongly classified into the unknown image class.
Step S1034, correcting the first activation values of the test samples in the preset number of sample categories according to the belonged probability, and acquiring second activation values.
In this embodiment, the probability of the test sample belonging is obtained through a probability distribution model, and the activation value of the test sample of the sample class with a larger activation value is corrected according to the probability of the belonging, where the calculation formula of the correction is:
$$\hat{a}_{test}^{c} = a_{test}^{c} \cdot w_c \quad (8)$$

where $a_{test}^{c}$ is the first activation value of the test sample for a selected sample class with a larger activation value, $w_c$ is the belonging probability of the test sample for that class, and $\hat{a}_{test}^{c}$ is the second activation value of the test sample.
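Putting formulas (7) and (8) together, a sketch of the activation-value correction might look like the following. Since formula (7) is reconstructed along the lines of Weibull-based OpenMax recalibration, the exact weighting, and evaluating the Weibull at the distance to the class mean, are assumptions rather than a verbatim rendering of the patent; `combined_distance` and the fitted parameters come from the earlier sketches.

```python
import numpy as np
from scipy.stats import weibull_min

def revise_activations(a_test: np.ndarray, weibulls: dict, means: dict,
                       alpha: int = 3, gamma: float = 1.0) -> np.ndarray:
    """Formula (8): second activation value a_hat_c = a_c * w_c, with w_c from
    the reconstructed formula (7). gamma shrinks the correction so one dominant
    class does not push samples into the unknown class (step S1033 remark)."""
    a_hat = a_test.copy()
    top = np.argsort(a_test)[::-1][:alpha]         # alpha classes with largest activation
    for rank, c in enumerate(top, start=1):
        d = combined_distance(a_test, means[c])    # distance to the class-c mean vector
        shape, loc, scale = weibulls[c]
        p_outlier = weibull_min.cdf(d, shape, loc=loc, scale=scale)
        w_c = 1.0 - gamma * ((alpha - rank) / alpha) * p_outlier  # formula (7), assumed form
        a_hat[c] = a_test[c] * w_c                 # formula (8)
    return a_hat
```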
According to this embodiment, the sample classes with the larger activation values for a test sample in the known-class images are selected, the activation value of the test sample is input into the probability distribution model to calculate its belonging probability, and the original activation value is corrected accordingly to obtain the corrected activation value. This controls the magnitude of the test sample's activation values, avoids the problem that the activation value of one specific class dominates and causes some samples to be wrongly classified into the unknown image class, and improves the accuracy of image classification.
And step S104, acquiring the activation value of the unknown class image according to the activation value of the known class image data.
In this embodiment, for the α selected sample classes, the activation value of the unknown-class image is constructed from the first and second activation values of the test samples.
Specifically, acquiring the activation value of the unknown class image according to the activation value of the known class image data includes:
calculating the activation value $a_{test}^{C+1}$ of the unknown-class image from the first and second activation values of the test sample; the calculation formula is:

$$a_{test}^{C+1} = \sum_{c=1}^{C} \left( a_{test}^{c} - \hat{a}_{test}^{c} \right) \quad (9)$$

where $a_{test}^{c}$ is the first activation value of the test sample, $\hat{a}_{test}^{c}$ is the second activation value of the test sample, and $c$ ranges over the $C$ total classes.
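Formula (9) then has a one-line sketch: the unknown class accumulates exactly what the correction removed from the known classes.

```python
import numpy as np

def unknown_activation(a_test: np.ndarray, a_hat: np.ndarray) -> float:
    """Formula (9): a_{C+1} = sum_c (a_c - a_hat_c)."""
    return float(np.sum(a_test - a_hat))
```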
And step S105, classifying the images according to the activation values of the known class images and the activation values of the unknown class images.
In this embodiment, the constructed activation value of the unknown-class image is combined with the activation values of the known-class images to form a (C+1)-dimensional activation vector, where C is the total number of classes under the closed-set condition of the known-class images; the (C+1)-dimensional activation vector is normalized, extending image classification under the closed-set condition to image classification under the open-set condition.
Specifically, as shown in the schematic flow chart of fig. 6, the classifying an image according to the activation value of the known class image and the activation value of the unknown class image includes:
step S1051, performing normalization processing on the activation value of the image of the known category and the activation value of the image of the unknown category, and acquiring a new activation value of the image.
In this embodiment, the softmax normalization function is applied to the corrected (C+1)-dimensional activation values, i.e., both the activation values of the known-class images and the activation value of the unknown-class image are normalized, giving new (C+1)-dimensional activation values in the range 0 to 1 that include the unknown class. The calculation formula of the new activation value is:

$$p_c = \frac{e^{\hat{a}^{c}}}{\sum_{j=1}^{C+1} e^{\hat{a}^{j}}} \quad (10)$$

where $\hat{a}^{c}$ is the corrected activation value for class $c$ and $p_c$ is the normalized probability that the current test image belongs to class $c$.

According to formula (10), a (C+1)-dimensional probability vector $p$ is obtained.
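The (C+1)-dimensional normalization of formula (10) is an ordinary softmax; the max-shift below is a standard numerical-stability detail, not something the patent specifies.

```python
import numpy as np

def openset_probabilities(a_hat_known: np.ndarray, a_unknown: float) -> np.ndarray:
    """Formula (10): softmax over the C known-class activations plus the
    unknown-class activation, giving a (C+1)-dimensional probability vector."""
    a = np.append(a_hat_known, a_unknown)          # C+1 dimensions
    e = np.exp(a - a.max())                        # shift for numerical stability
    return e / e.sum()
```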
Step S1052, selecting the pending category value corresponding to the current test image with the largest activation value from the new activation values.
In this embodiment, the (C+1)-dimensional probability vector is indexed, the largest of the new activation values is extracted, the current test image corresponding to that largest activation value is obtained, and the undetermined class value $c^*$ of the current test image is further obtained.
Step S1053, judging whether the undetermined class value corresponds to the unknown class value.
In this embodiment, given the undetermined class value $c^*$ of the current test image, it is judged whether $c^*$ is equal to the unknown class value $C+1$.
Step S1054, if yes, refusing to identify the current test image, and judging the current test image as an undefined type.
In this embodiment, if the undetermined class value $c^*$ is equal to the unknown class value $C+1$, the current test image is rejected and determined to be of an undefined class, realizing classification of images under the open-set condition.
And step S1055, if not, judging whether the activation value corresponding to the current test image is smaller than a preset threshold value.
In this embodiment, an image recognition threshold is set, an activation value of the current test image after normalization processing is extracted, and whether the activation value of the current test image is smaller than a preset image recognition threshold is determined. The preset threshold value can be a rejection threshold value, namely a threshold value for rejecting the identification image by the system; or it may also be an acceptance threshold, i.e. a threshold at which the system can accept the image recognition result.
Step S1056, if yes, refusing to identify the current test image, and determining that the current test image is in an undefined category.
In this embodiment, if the activation value corresponding to the current test image is smaller than the preset threshold, the current test image is rejected from being identified, and the current test image is determined as an undefined type; therefore, when images outside the training set of the known class images are encountered, the images can be classified reasonably and correctly.
Step S1057, if not, judging that the current test image belongs to a known class, and classifying the current test image into the known class.
In this embodiment, if the current test image does not belong to an undefined class, the known-class images are classified according to the closed-set condition, and it is determined that the current test image belongs to the known class $c^*$.
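Steps S1052 to S1057 reduce to the following decision rule; the rejection threshold value is an assumption, since the patent leaves it as a design parameter.

```python
import numpy as np

def classify(p: np.ndarray, threshold: float = 0.5):
    """Pick the class with the largest normalized activation; reject as an
    undefined class if it is the unknown class (last index) or below threshold."""
    c_star = int(np.argmax(p))                     # undetermined class value c*
    if c_star == len(p) - 1 or p[c_star] < threshold:
        return None                                # rejected: undefined class
    return c_star                                  # known class c*
```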
According to this embodiment, images of certain classes are selected to train the deep convolutional neural network model under the closed-set condition, and a network training model with good image classification performance is obtained by selecting suitable hyper-parameters and updating the network parameters; a probability distribution model is estimated for each sample class under the closed-set condition, reasonably characterizing the sample distribution probability of each class; the image activation values under the closed-set condition are corrected using the sample distribution probabilities, the activation value of the unknown class is constructed, all activation values are normalized, and the normalized activation values are used to recognize images under the open-set condition, rejecting unknown-class images and correctly classifying known-class images. This improves the rationality and accuracy of image classification, meets the requirements of practical application scenarios, and has high practicability.
It should be noted that, within the technical scope of the present disclosure, other sequencing schemes that can be easily conceived by those skilled in the art should also be within the protection scope of the present disclosure, and detailed description is omitted here.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Example two
Fig. 7 is a schematic diagram of an image classification apparatus according to an embodiment of the present invention, and only a part related to the embodiment of the present invention is shown for convenience of description.
The image classification apparatus includes:
a first model obtaining unit 71, configured to train the deep convolutional neural network through a known class image, and obtain a network training model;
a second model obtaining unit 72, configured to respectively establish a probability distribution model for each type of sample in the known type of image according to the network training model;
a correcting unit 73, configured to correct the activation value of the known class image according to the probability distribution model;
an activation value obtaining unit 74, configured to obtain an activation value of an unknown class image according to the activation value of the known class image data;
an image classification determining unit 75, configured to classify the image according to the activation value of the known class image and the activation value of the unknown class image.
Optionally, the first model obtaining unit includes:
the data dividing module is used for dividing the acquired known class images into a training set and a test set;
the first result generation module is used for training the deep convolutional neural network through the images of the training set, testing the classification performance of the deep convolutional neural network through the images of the test set and outputting a network classification result;
the second result generation module is used for carrying out supervision operation on the network classification result through a loss function to obtain a supervision operation result;
and the parameter adjusting module is used for adjusting the network parameters of the deep convolutional neural network according to the supervision operation result.
According to this embodiment, images of certain classes are selected to train the deep convolutional neural network model under the closed-set condition, and a network training model with good image classification performance is obtained by selecting suitable hyper-parameters and updating the network parameters; a probability distribution model is estimated for each sample class under the closed-set condition, reasonably characterizing the sample distribution probability of each class; the image activation values under the closed-set condition are corrected using the sample distribution probabilities, the activation value of the unknown class is constructed, all activation values are normalized, and the normalized activation values are used to recognize images under the open-set condition, rejecting unknown-class images and correctly classifying known-class images, improving the rationality and accuracy of image classification and meeting the requirements of practical application scenarios.
It will be apparent to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely illustrated, and in practical applications, the above function distribution may be performed by different functional units and modules as needed, that is, the internal structure of the mobile terminal is divided into different functional units or modules to perform all or part of the above described functions. Each functional module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional modules are only used for distinguishing one functional module from another, and are not used for limiting the protection scope of the application. The specific working process of the module in the mobile terminal may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Example three
Fig. 8 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 8, the terminal device 8 of this embodiment includes: a processor 80, a memory 81, and a computer program 82 stored in the memory 81 and executable on the processor 80. The processor 80, when executing the computer program 82, implements the steps in the image classification method embodiments described above, such as steps S101 to S105 shown in fig. 1. Alternatively, the processor 80, when executing the computer program 82, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 71 to 75 shown in fig. 7.
Illustratively, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 82 in the terminal device 8. For example, the computer program 82 may be divided into a first model acquisition unit, a second model acquisition unit, a modification unit, an activation value acquisition unit, and an image classification determination unit, and the specific functions of the modules are as follows:
the first model acquisition unit is used for training the deep convolution neural network through the known class image to acquire a network training model;
the second model obtaining unit is used for respectively establishing a probability distribution model for each type of sample in the known type of image according to the network training model;
the correcting unit is used for correcting the activation value of the known class image according to the probability distribution model;
the activation value acquisition unit is used for acquiring the activation value of the unknown class image according to the activation value of the known class image data;
and the image classification judging unit is used for classifying the images according to the activation values of the known class images and the activation values of the unknown class images.
The terminal device 8 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 80, a memory 81. Those skilled in the art will appreciate that fig. 8 is merely an example of a terminal device 8 and does not constitute a limitation of terminal device 8 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The Processor 80 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 81 may be an internal storage unit of the apparatus/terminal device 8, such as a hard disk or a memory of the terminal device 8. The memory 81 may also be an external storage device of the terminal device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 8. Further, the memory 81 may also include both an internal storage unit and an external storage device of the terminal device 8. The memory 81 is used for storing the computer program and other programs and data required by the terminal device. The memory 81 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments are implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the protection scope of the present invention.
Claims (10)
1. An image classification method, comprising:
training a deep convolutional neural network with known-class images to obtain a network training model;
establishing, according to the network training model, a probability distribution model for each class of samples in the known-class images;
correcting the activation values of the known-class images according to the probability distribution models;
acquiring an activation value of an unknown-class image according to the activation values of the known-class images;
and classifying images according to the activation values of the known-class images and the activation value of the unknown-class image.
2. The image classification method of claim 1, wherein training the deep convolutional neural network with known-class images to obtain the network training model comprises:
dividing the acquired known-class images into a training set and a test set;
training the deep convolutional neural network with the images of the training set, testing the classification performance of the network with the images of the test set, and outputting a network classification result;
supervising the network classification result with a loss function to obtain a supervision result;
and adjusting the network parameters of the deep convolutional neural network according to the supervision result.
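As an illustration only (not part of the claims), the procedure of claim 2 might look as follows in PyTorch; the ResNet-18 backbone, the 80/20 split, the folder name, and all hyperparameters are assumptions of this sketch rather than requirements of the method:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms, models

# Divide the acquired known-class images into a training set and a test set.
data = datasets.ImageFolder(
    "known_class_images",  # hypothetical folder of known-class images
    transform=transforms.Compose([transforms.Resize((224, 224)),
                                  transforms.ToTensor()]))
n_train = int(0.8 * len(data))
train_set, test_set = random_split(data, [n_train, len(data) - n_train])

net = models.resnet18(num_classes=len(data.classes))  # stand-in deep CNN
loss_fn = nn.CrossEntropyLoss()  # loss function supervising the classification result
opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)

for epoch in range(10):
    net.train()
    for x, y in DataLoader(train_set, batch_size=64, shuffle=True):
        opt.zero_grad()
        loss = loss_fn(net(x), y)  # supervision operation on the network output
        loss.backward()
        opt.step()                 # adjust the network parameters accordingly

    # Test the classification performance on the held-out test set.
    net.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in DataLoader(test_set, batch_size=64):
            correct += (net(x).argmax(1) == y).sum().item()
            total += y.numel()
    print(f"epoch {epoch}: test accuracy {correct / total:.3f}")
```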
3. The image classification method according to claim 1, wherein establishing a probability distribution model for each class of samples in the known-class images according to the network training model comprises:
obtaining a mean vector of each class of samples in the known-class images;
calculating the distance between each sample of the class and the mean vector;
selecting a number of input samples from each class of samples according to the distances and a preset proportion;
and estimating the model parameters of the probability distribution model for the class from the selected input samples.
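The claim does not name the distribution being fitted; the sketch below assumes a Weibull fit to the tail of the per-class distance distribution, as in OpenMax-style open-set methods, with `tail_fraction` standing in for the preset proportion:

```python
import numpy as np
from scipy.stats import weibull_min

def fit_class_models(activations, labels, tail_fraction=0.1):
    """Fit a per-class probability distribution model (claim 3).

    activations: (N, C) array of activation vectors produced by the
                 trained network for the known-class training samples;
    labels:      (N,) array of ground-truth class ids.
    """
    class_models = {}
    for c in np.unique(labels):
        v = activations[labels == c]
        mu = v.mean(axis=0)                        # mean vector of this class
        d = np.linalg.norm(v - mu, axis=1)         # distance of each sample to the mean
        k = max(1, int(tail_fraction * len(d)))    # preset proportion of samples
        tail = np.sort(d)[-k:]                     # the samples farthest from the mean
        shape, loc, scale = weibull_min.fit(tail)  # estimate the model parameters
        class_models[int(c)] = {"mean": mu, "shape": shape,
                                "loc": loc, "scale": scale}
    return class_models
```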
4. The image classification method of claim 1, wherein correcting the activation values of the known-class images according to the probability distribution models comprises:
extracting a first activation value of a test sample of the known-class images through the network training model;
selecting a preset number of sample classes for the test sample according to the first activation value;
calculating the probability of the test sample under each of the preset number of sample classes according to the corresponding probability distribution models and the first activation values;
and correcting the first activation values of the test sample for the preset number of sample classes according to the probabilities, to obtain second activation values.
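A sketch of the correction step in claim 4, again following the OpenMax recipe; the rank-dependent weight `w` is an assumption of this sketch, since the claim only requires that the top classes' first activation values be corrected using the computed probabilities. It consumes the `class_models` produced by the previous sketch:

```python
import numpy as np
from scipy.stats import weibull_min

def revise_activations(v, class_models, alpha=5):
    """Correct the first activation vector v of one test sample (claim 4).

    v: (C,) first activation values of one test sample;
    class_models: output of fit_class_models above;
    alpha: the preset number of top-scoring sample classes to correct.
    """
    v = np.asarray(v, dtype=float)
    v_hat = v.copy()
    top = np.argsort(v)[::-1][:alpha]          # preset number of sample classes
    for rank, c in enumerate(top):
        m = class_models[int(c)]
        d = np.linalg.norm(v - m["mean"])      # distance to the class mean vector
        # Probability that the sample is an outlier for class c, from the
        # fitted distribution and the sample's activation vector.
        p = weibull_min.cdf(d, m["shape"], loc=m["loc"], scale=m["scale"])
        w = 1.0 - p * (alpha - rank) / alpha   # assumed rank-dependent weighting
        v_hat[c] = v[c] * w                    # second (corrected) activation value
    return v, v_hat
```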
5. The image classification method of claim 4, wherein acquiring the activation value of the unknown-class image according to the activation values of the known-class images comprises:
calculating the activation value $\hat{v}_0$ of the unknown-class image according to the first activation values and the second activation values of the test sample for the selected preset number of sample classes, with the calculation formula:

$$\hat{v}_0 = \sum_{c=1}^{C} \left( v_c - \hat{v}_c \right)$$

wherein $v_c$ is the first activation value of the test sample for class $c$, $\hat{v}_c$ is the corresponding second activation value, and $c$ is any one class among the total number of classes $C$.
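Under this formula, the unknown-class activation is simply the activation mass removed from the known classes by the correction. A minimal sketch, assuming `v` and `v_hat` are the vectors returned by the correction step above:

```python
import numpy as np

def unknown_activation(v, v_hat):
    """Activation value of the unknown class: v0 = sum_c (v_c - v_hat_c).

    Classes that were not corrected contribute zero, because there
    v_hat_c == v_c.
    """
    return float(np.sum(np.asarray(v) - np.asarray(v_hat)))
```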
6. The image classification method according to claim 1, wherein classifying an image according to the activation values of the known-class images and the activation value of the unknown-class image comprises:
normalizing the activation values of the known classes together with the activation value of the unknown class to obtain new activation values of the image;
selecting, from the new activation values, the pending class corresponding to the maximum activation value of the current test image;
judging whether the pending class corresponds to the unknown class;
if so, refusing to recognize the current test image and judging it to be of an undefined class;
if not, judging whether the activation value of the current test image is smaller than a preset threshold;
if so, refusing to recognize the current test image and judging it to be of an undefined class;
if not, judging that the current test image belongs to a known class and classifying it accordingly.
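Putting the pieces together, the decision rule of claim 6 could be sketched as follows; the softmax normalization and the threshold value of 0.5 are assumptions, since the claim only requires some normalization and a preset threshold:

```python
import numpy as np

UNKNOWN = -1  # label returned when recognition is refused (undefined class)

def classify(v_hat, v0, threshold=0.5):
    """Decision rule of claim 6 for one test image.

    v_hat: (C,) corrected activation values of the known classes;
    v0:    activation value of the unknown class.
    """
    scores = np.append(v_hat, v0)        # known classes plus the unknown class
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                 # normalized: the new activation values
    pending = int(probs.argmax())        # pending class with maximum activation
    if pending == len(v_hat):            # pending class is the unknown class
        return UNKNOWN                   # refuse recognition: undefined class
    if probs[pending] < threshold:       # below the preset threshold
        return UNKNOWN                   # refuse recognition: undefined class
    return pending                       # classify into the known class
```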
7. An image classification apparatus, comprising:
a first model acquisition unit, configured to train a deep convolutional neural network with known-class images to obtain a network training model;
a second model acquisition unit, configured to establish, according to the network training model, a probability distribution model for each class of samples in the known-class images;
a correction unit, configured to correct the activation values of the known-class images according to the probability distribution models;
an activation value acquisition unit, configured to acquire the activation value of an unknown-class image according to the activation values of the known-class images;
and an image classification judging unit, configured to classify images according to the activation values of the known-class images and the activation value of the unknown-class image.
8. The image classification apparatus according to claim 7, wherein the first model acquisition unit comprises:
a data dividing module, configured to divide the acquired known-class images into a training set and a test set;
a first result generation module, configured to train the deep convolutional neural network with the images of the training set, test the classification performance of the network with the images of the test set, and output a network classification result;
a second result generation module, configured to supervise the network classification result with a loss function to obtain a supervision result;
and a parameter adjusting module, configured to adjust the network parameters of the deep convolutional neural network according to the supervision result.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811284267.2A CN109376786A (en) | 2018-10-31 | 2018-10-31 | Image classification method, device, terminal device and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109376786A true CN109376786A (en) | 2019-02-22 |
Family ID: 65390741
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811284267.2A (pending) | Image classification method, device, terminal device and readable storage medium | 2018-10-31 | 2018-10-31 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109376786A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107622272A (en) * | 2016-07-13 | 2018-01-23 | 华为技术有限公司 | Image classification method and device |
CN107229942A (en) * | 2017-04-16 | 2017-10-03 | 北京工业大学 | Rapid convolutional neural network classification method based on multiple classifiers |
CN107506799A (en) * | 2017-09-01 | 2017-12-22 | 北京大学 | Open-set category mining and extension method and device based on deep neural network |
CN107895170A (en) * | 2017-10-31 | 2018-04-10 | 天津大学 | Dropout regularization method based on activation value sensitivity |
CN107967484A (en) * | 2017-11-14 | 2018-04-27 | 中国计量大学 | Image classification method based on multiple resolutions |
CN108710831A (en) * | 2018-04-24 | 2018-10-26 | 华南理工大学 | Machine-vision-based face recognition algorithm for small datasets |
CN108596258A (en) * | 2018-04-27 | 2018-09-28 | 南京邮电大学 | Image classification method based on convolutional neural network random pooling |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109977781A (en) * | 2019-02-26 | 2019-07-05 | 上海上湖信息技术有限公司 | Face detection method and device, and readable storage medium |
CN109919241A (en) * | 2019-03-15 | 2019-06-21 | 中国人民解放军国防科技大学 | Hyperspectral unknown class target detection method based on probability model and deep learning |
CN109919241B (en) * | 2019-03-15 | 2020-09-29 | 中国人民解放军国防科技大学 | Hyperspectral unknown class target detection method based on probability model and deep learning |
WO2020191988A1 (en) * | 2019-03-23 | 2020-10-01 | 南京智慧光信息科技研究院有限公司 | New category identification method and robot system based on fuzzy theory and deep learning |
CN110147456A (en) * | 2019-04-12 | 2019-08-20 | 中国科学院深圳先进技术研究院 | Image classification method and device, readable storage medium and terminal device |
CN110147456B (en) * | 2019-04-12 | 2023-01-24 | 中国科学院深圳先进技术研究院 | Image classification method and device, readable storage medium and terminal equipment |
CN110059754A (en) * | 2019-04-22 | 2019-07-26 | 厦门大学 | Batch data steganography method, terminal device and storage medium |
CN110135505B (en) * | 2019-05-20 | 2021-09-17 | 北京达佳互联信息技术有限公司 | Image classification method and device, computer equipment and computer readable storage medium |
CN110135505A (en) * | 2019-05-20 | 2019-08-16 | 北京达佳互联信息技术有限公司 | Image classification method, device, computer equipment and computer readable storage medium |
CN110222704B (en) * | 2019-06-12 | 2022-04-01 | 北京邮电大学 | Weak supervision target detection method and device |
CN110222704A (en) * | 2019-06-12 | 2019-09-10 | 北京邮电大学 | Weakly supervised object detection method and device |
CN110472675A (en) * | 2019-07-31 | 2019-11-19 | Oppo广东移动通信有限公司 | Image classification method, image classification device, storage medium and electronic equipment |
CN110472681A (en) * | 2019-08-09 | 2019-11-19 | 北京市商汤科技开发有限公司 | Neural network training scheme and image processing scheme based on knowledge distillation |
CN110567967B (en) * | 2019-08-20 | 2022-06-17 | 武汉精立电子技术有限公司 | Display panel detection method, system, terminal device and computer readable medium |
CN110567967A (en) * | 2019-08-20 | 2019-12-13 | 武汉精立电子技术有限公司 | Display panel detection method, system, terminal device and computer readable medium |
CN110751675A (en) * | 2019-09-03 | 2020-02-04 | 平安科技(深圳)有限公司 | Urban pet activity track monitoring method based on image recognition and related equipment |
CN110751675B (en) * | 2019-09-03 | 2023-08-11 | 平安科技(深圳)有限公司 | Urban pet activity track monitoring method based on image recognition and related equipment |
CN110909760A (en) * | 2019-10-12 | 2020-03-24 | 中国人民解放军国防科技大学 | Image open set identification method based on convolutional neural network |
CN110826713A (en) * | 2019-10-25 | 2020-02-21 | 广州思德医疗科技有限公司 | Method and device for acquiring special convolution kernel |
CN110826713B (en) * | 2019-10-25 | 2022-06-10 | 广州思德医疗科技有限公司 | Method and device for acquiring special convolution kernel |
CN111612010A (en) * | 2020-05-21 | 2020-09-01 | 京东方科技集团股份有限公司 | Image processing method, device, equipment and computer readable storage medium |
CN111930935B (en) * | 2020-06-19 | 2024-06-07 | 普联国际有限公司 | Image classification method, device, equipment and storage medium |
CN111930935A (en) * | 2020-06-19 | 2020-11-13 | 普联国际有限公司 | Image classification method, device, equipment and storage medium |
CN112508062A (en) * | 2020-11-20 | 2021-03-16 | 普联国际有限公司 | Open set data classification method, device, equipment and storage medium |
CN112541905B (en) * | 2020-12-16 | 2022-08-05 | 华中科技大学 | Product surface defect identification method based on lifelong learning convolutional neural network |
CN112541905A (en) * | 2020-12-16 | 2021-03-23 | 华中科技大学 | Product surface defect identification method based on lifelong learning convolutional neural network |
CN113743443A (en) * | 2021-05-31 | 2021-12-03 | 高新兴科技集团股份有限公司 | Image evidence classification and identification method and device |
CN113743443B (en) * | 2021-05-31 | 2024-05-17 | 高新兴科技集团股份有限公司 | Image evidence classification and recognition method and device |
CN113705446B (en) * | 2021-08-27 | 2023-04-07 | 电子科技大学 | Open set identification method for individual radiation source |
CN113705446A (en) * | 2021-08-27 | 2021-11-26 | 电子科技大学 | Open set identification method for individual radiation source |
CN115083442A (en) * | 2022-04-29 | 2022-09-20 | 马上消费金融股份有限公司 | Data processing method, data processing device, electronic equipment and computer readable storage medium |
CN115083442B (en) * | 2022-04-29 | 2023-08-08 | 马上消费金融股份有限公司 | Data processing method, device, electronic equipment and computer readable storage medium |
CN116071600A (en) * | 2023-02-17 | 2023-05-05 | 中国科学院地理科学与资源研究所 | Crop remote sensing identification method and device based on multi-classification probability |
CN116071600B (en) * | 2023-02-17 | 2023-08-04 | 中国科学院地理科学与资源研究所 | Crop remote sensing identification method and device based on multi-classification probability |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109376786A (en) | Image classification method, device, terminal device and readable storage medium | |
CN111915437B (en) | RNN-based anti-money-laundering model training method, device, equipment and medium | |
CN110020592B (en) | Object detection model training method, device, computer equipment and storage medium | |
WO2019200782A1 (en) | Sample data classification method, model training method, electronic device and storage medium | |
CN111860573A (en) | Model training method, image class detection method and device and electronic equipment | |
CN109460793A (en) | A kind of method of node-classification, the method and device of model training | |
CN106845421A (en) | Face characteristic recognition methods and system based on multi-region feature and metric learning | |
CN111027378A (en) | Pedestrian re-identification method, device, terminal and storage medium | |
CN108280477A (en) | Method and apparatus for clustering image | |
CN109993221B (en) | Image classification method and device | |
CN110874604A (en) | Model training method and terminal equipment | |
CN110427835B (en) | Electromagnetic signal identification method and device for graph convolution network and transfer learning | |
CN111709313B (en) | Pedestrian re-identification method based on local and channel combination characteristics | |
CN110879982A (en) | Crowd counting system and method | |
CN114492768A (en) | Twin capsule network intrusion detection method based on small sample learning | |
CN114359738A (en) | Cross-scene robust indoor population wireless detection method and system | |
CN113989519B (en) | Long-tail target detection method and system | |
CN109104257B (en) | Wireless signal detection method and device | |
WO2015146113A1 (en) | Identification dictionary learning system, identification dictionary learning method, and recording medium | |
CN103927529B (en) | Preparation method, application method and system of a final classifier | |
CN106682604B (en) | Blurred image detection method based on deep learning | |
CN116113952A (en) | Distance between distributions for images belonging to intra-distribution metrics | |
CN107944363A (en) | Face image processing process, system and server | |
CN109583492A (en) | A kind of method and terminal identifying antagonism image | |
CN114168788A (en) | Audio audit processing method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20190222 |