CN111488950B - Classification model information output method and device - Google Patents
Classification model information output method and device
- Publication number
- CN111488950B (application CN202010407910.7A)
- Authority
- CN
- China
- Prior art keywords
- sample
- classification model
- probabilities
- original
- classification
- Prior art date: 2020-05-14
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06F18/24155—Bayesian classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Molecular Biology (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Probability & Statistics with Applications (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Embodiments of this specification provide a classification model information output method and apparatus. In the method, a sample set of a classification model is obtained. For any first sample in the sample set, the first sample is input into the classification model to obtain a plurality of original probabilities of the first sample corresponding to respective predetermined classes. A classification result of the first sample is determined according to the plurality of original probabilities and a predetermined threshold. The plurality of original probabilities are then converted, based on the predetermined threshold, by a random algorithm to obtain a plurality of randomized probabilities corresponding respectively to the plurality of original probabilities. The random algorithm ensures that the classification result determined based on the plurality of original probabilities is the same as the classification result determined based on the plurality of randomized probabilities. Finally, the classification result of the first sample and the plurality of randomized probabilities are output.
Description
Technical Field
One or more embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a classification model information output method and apparatus.
Background
With the rapid development of machine learning and artificial intelligence, machine learning techniques are now applied in many fields to solve problems that traditional algorithms struggle with. However, because machine learning models, and deep learning models in particular, are largely uninterpretable, the accuracy and recall of a model depend heavily on the quantity and quality of its data (samples). In real-world scenarios, training on a massive number of samples may make the training time unacceptable, and that many high-quality data sources may not even exist. Moreover, a model trained on a limited set of samples is vulnerable to adversarial samples crafted specifically against it.
Disclosure of Invention
One or more embodiments of the present specification describe a classification model information output method and apparatus, which can ensure the security of a classification model.
In a first aspect, a classification model information output method is provided, including:
obtaining a sample set of a classification model;
for any first sample in the sample set, inputting the first sample into the classification model to obtain a plurality of original probabilities of the first sample corresponding to each predetermined category;
determining a classification result of the first sample according to the plurality of original probabilities and a predetermined threshold;
converting, by a random algorithm, the plurality of original probabilities based on the predetermined threshold to obtain a plurality of randomized probabilities corresponding respectively to the plurality of original probabilities; the random algorithm ensures that the classification result determined based on the plurality of original probabilities is the same as the classification result determined based on the plurality of randomized probabilities;
outputting a classification result of the first sample and a plurality of randomized probabilities.
In a second aspect, there is provided a classification model information output apparatus including:
an obtaining unit, configured to obtain a sample set of the classification model;
an input unit, configured to, for any first sample in the sample set obtained by the obtaining unit, input the first sample into the classification model to obtain a plurality of original probabilities of the first sample corresponding to each predetermined category;
a determining unit, configured to determine a classification result of the first sample according to the plurality of original probabilities and a predetermined threshold;
a conversion unit, configured to convert, by a random algorithm, the plurality of original probabilities based on the predetermined threshold to obtain a plurality of randomized probabilities corresponding respectively to the plurality of original probabilities; the random algorithm ensures that the classification result determined based on the plurality of original probabilities is the same as the classification result determined based on the plurality of randomized probabilities;
an output unit, configured to output the classification result of the first sample determined by the determination unit and the plurality of randomized probabilities converted by the conversion unit.
In a third aspect, there is provided a computer storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
In a fourth aspect, there is provided a computing device comprising a memory having stored therein executable code and a processor that, when executing the executable code, implements the method of the first aspect.
In the classification model information output method and apparatus provided in one or more embodiments of this specification, for a sample, after the corresponding classification result is determined by the classification model, the plurality of original probabilities of the sample corresponding to the respective predetermined classes are converted, based on a predetermined threshold, by a random algorithm to obtain a plurality of randomized probabilities corresponding respectively to the plurality of original probabilities. The random algorithm ensures that the classification result determined based on the plurality of original probabilities is the same as the classification result determined based on the plurality of randomized probabilities. The classification result of the sample and the plurality of randomized probabilities are then output. That is, the solution provided in this specification randomly perturbs the reference data (i.e., the original probabilities) used to determine the classification result while ensuring that the classification result of the sample is not changed. This prevents the classification model from being attacked through leakage of its reference data and thereby protects the classification model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present disclosure, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario of a classification model information output method provided in the present specification;
FIG. 2 is a flowchart of a classification model information output method provided in an embodiment of the present specification;
Fig. 3 is a schematic diagram of a classification model information output apparatus according to an embodiment of the present disclosure.
Detailed Description
The scheme provided by the specification is described below with reference to the accompanying drawings.
Before describing the solution provided in the present specification, the inventive concept of the present solution will be explained below.
As described in the background section, the conventional training process of a machine learning model has the following problems: training on a massive number of samples can make the training time unacceptable, and that many high-quality data sources may not be available; moreover, a model trained on a limited set of samples is vulnerable to adversarial samples crafted specifically against it. For example, in application scenarios with high security requirements (such as automatic driving and face-based payment), a targeted adversarial-sample attack can cause serious accidents and financial losses, so a model deployed in such a scenario must be highly resistant to this kind of adversarial attack.
A scheme is therefore urgently needed to address the vulnerability of machine learning models to such attacks.
Taking a classification model as an example of a machine learning model, to address this vulnerability the inventors propose the following: for a given sample, after the classification model determines the corresponding classification result, the plurality of original probabilities of the sample corresponding to the respective predetermined classes are converted, based on a predetermined threshold, by a random algorithm to obtain a plurality of randomized probabilities corresponding respectively to the original probabilities. The random algorithm ensures that the classification result determined based on the plurality of original probabilities is the same as the classification result determined based on the plurality of randomized probabilities. The classification result of the sample and the plurality of randomized probabilities are then output. In other words, the scheme provided in this specification randomly perturbs the reference data used to determine the classification result while guaranteeing that the classification result of the sample is unchanged. This prevents the classification model from being attacked through leakage of its reference data, raises the difficulty of training an adversarial attack, and thereby protects the classification model.
Fig. 1 is a schematic view of an application scenario of the classification model information output method provided in this specification. In Fig. 1, the classification model may be loaded by a machine learning engine. A sample set of the classification model is then obtained. For any first sample in the sample set, the first sample is input into the classification model to obtain a plurality of original probabilities of the first sample corresponding to each predetermined class. A classification result of the first sample is determined according to the plurality of original probabilities and a predetermined threshold. The plurality of original probabilities are converted, based on the predetermined threshold, by a random algorithm to obtain a plurality of randomized probabilities corresponding respectively to the plurality of original probabilities. The random algorithm ensures that the classification result determined based on the plurality of original probabilities is the same as the classification result determined based on the plurality of randomized probabilities. Finally, the classification result of the first sample and the plurality of randomized probabilities are output.
Fig. 2 is a flowchart of a classification model information output method according to an embodiment of the present disclosure. The method may be executed by any device with processing capability, such as a server, a system, or an apparatus. As shown in Fig. 2, the method may specifically include:
Step 202, a sample set of the classification model is obtained.
The classification model may be implemented with algorithms such as neural networks, gradient decision trees, Bayesian classification, support vector machines, and random forests. In one example, the classification model is pre-trained on a training sample set and has been evaluated on a corresponding test sample set. In one example, the classification model may be trained based on a cross-entropy loss function, a hinge loss function, an exponential loss function, or the like.
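As a concrete illustration (not part of the claimed embodiment), the following minimal sketch trains a random-forest classifier on toy data and reads out the per-class probabilities that the later steps operate on; the dataset shape, feature count, and class count are assumptions made only for this example.

```python
# Minimal sketch, assuming scikit-learn is available; the toy data and the
# choice of a random forest are illustrative, not mandated by the embodiment.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))        # 200 samples with 8 features (assumed)
y_train = rng.integers(0, 3, size=200)     # 3 predetermined classes (assumed)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

x = rng.normal(size=(1, 8))                # a "first sample"
original_probs = clf.predict_proba(x)[0]   # the plurality of original probabilities
print(original_probs, original_probs.sum())  # the probabilities sum to 1
```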
In addition, the classification model can be used for business processing. The business processing here may include, but is not limited to, business processing based on image recognition (e.g., face recognition, object detection, etc.), business processing based on user classification (e.g., user population segmentation, user service customization, etc.), business processing based on audio recognition (e.g., speech recognition, voiceprint analysis, etc.), and business processing based on text analysis (e.g., semantic analysis, intent recognition, etc.).
Specifically, if the business process is a business process based on image recognition, the samples in the training sample set and the test sample set may be pictures, and the classification model obtained by training may be a picture classification model. If the business process is a business process based on audio recognition, the samples in the training sample set and the test sample set may be audio, and the classification model obtained by training may be an audio classification model. If the business process is a business process based on text analysis, the samples in the training sample set and the test sample set may be texts, and the classification model obtained by training may be a text classification model.
It should be further noted that the samples in the sample set described in step 202 may be selected from the training sample set and/or the test sample set of the classification model; that is, each sample may be a picture, a text, or an audio clip. In one example, the samples in the sample set described in step 202 satisfy the following condition: they cover, as far as possible, every class involved in the classification model's current business processing.
Step 204, for any first sample in the sample set, the first sample is input into the classification model to obtain a plurality of original probabilities of the first sample corresponding to each predetermined class.
It should be understood that when the classification model is used for business processing based on picture analysis, each predetermined class may be a picture class. When the classification model is used for business processing based on text analysis, each predetermined class may be a text class. When the classification model is used for business processing based on audio analysis, each predetermined class may be an audio class. Further, the sum of the plurality of original probabilities corresponding to the respective predetermined classes may be 1.
Step 206, determining a classification result of the first sample according to the plurality of original probabilities and a predetermined threshold.
The predetermined threshold may be set manually based on empirical values. Specifically, different predetermined thresholds may be set for the different business processes currently performed by the classification model.
In one example, the step of determining the classification result of the first sample may include: if the plurality of original probabilities are all smaller than the predetermined threshold, determining that the classification result of the first sample is that the first sample does not belong to any of the predetermined classes; if at least one of the plurality of original probabilities is not smaller than the predetermined threshold, determining that the classification result of the first sample is that the first sample belongs to the class corresponding to the maximum probability among the predetermined classes.
As can be seen from the above, the classification result of the first sample described in this specification may be one of two types: the first sample does not belong to any of the predetermined classes, or the first sample belongs to one of the predetermined classes.
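A minimal sketch of this decision rule follows; the threshold value of 0.5 and the `NO_CLASS` marker for 'does not belong to any class' are assumptions made for illustration.

```python
import numpy as np

NO_CLASS = -1  # hypothetical marker for "does not belong to any predetermined class"

def classify(original_probs, threshold=0.5):
    """Return the index of the class with the maximum probability, or NO_CLASS
    if every original probability is below the predetermined threshold."""
    if np.all(original_probs < threshold):
        return NO_CLASS
    return int(np.argmax(original_probs))
```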
Step 208, the plurality of original probabilities are converted, by a random algorithm, based on the predetermined threshold to obtain a plurality of randomized probabilities corresponding respectively to the plurality of original probabilities.
The random algorithm here ensures that the classification result determined based on the plurality of original probabilities is the same as the classification result determined based on the plurality of randomized probabilities.
In one example, the step of converting the plurality of original probabilities may include: if the plurality of original probabilities are all smaller than the predetermined threshold, then for each of the original probabilities, randomly generating a value smaller than the predetermined threshold as the randomized probability corresponding to that original probability. If at least one of the plurality of original probabilities is not smaller than the predetermined threshold, then for the maximum of the original probabilities, randomly generating a value not smaller than the predetermined threshold as the randomized probability corresponding to the maximum probability; and for each other probability (i.e., each original probability other than the maximum probability), randomly generating a value smaller than the randomized probability corresponding to the maximum probability as the randomized probability corresponding to that other probability.
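A minimal sketch of this conversion, assuming uniform sampling as the source of randomness (the embodiment does not prescribe a particular distribution, and the randomized values are not required to sum to 1):

```python
import numpy as np

def randomize_probs(original_probs, threshold=0.5, rng=None):
    """Replace the original probabilities with randomly generated values while
    preserving the threshold-based classification result."""
    rng = rng or np.random.default_rng()
    randomized = np.empty_like(original_probs, dtype=float)
    if np.all(original_probs < threshold):
        # All classes below the threshold: every randomized value stays below it too.
        randomized[:] = rng.uniform(0.0, threshold, size=original_probs.shape)
    else:
        # Keep the maximum on top: draw it at or above the threshold, and draw every
        # other value strictly below the value drawn for the maximum.
        top = int(np.argmax(original_probs))
        randomized[top] = rng.uniform(threshold, 1.0)
        others = [i for i in range(original_probs.size) if i != top]
        randomized[others] = rng.uniform(0.0, randomized[top], size=len(others))
    return randomized
```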
In another example, if at least one of the plurality of original probabilities is not smaller than the predetermined threshold, two value ranges are formed with the predetermined threshold as the boundary: the lower limit of the first value range is the predetermined threshold (inclusive), and the upper limit of the second value range is the predetermined threshold (exclusive). The randomized probability corresponding to the maximum of the original probabilities is then generated from the first value range, and the randomized probability corresponding to each other original probability is generated from the second value range.
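The range-based variant can be sketched in the same spirit; the upper bound of 1.0 for the first value range is an assumption (any bound at or above the threshold would serve), and the fallback for the all-below-threshold case simply mirrors the previous sketch.

```python
import numpy as np

def randomize_probs_by_ranges(original_probs, threshold=0.5, rng=None):
    """Range-based conversion: the maximum probability is redrawn from the first
    range [threshold, 1.0); every other probability is redrawn from the second
    range [0.0, threshold)."""
    rng = rng or np.random.default_rng()
    if np.all(original_probs < threshold):
        # This variant targets the "at least one >= threshold" case; otherwise
        # draw everything below the threshold, as in the previous sketch.
        return rng.uniform(0.0, threshold, size=original_probs.shape)
    randomized = rng.uniform(0.0, threshold, size=original_probs.shape)      # second range
    randomized[int(np.argmax(original_probs))] = rng.uniform(threshold, 1.0)  # first range
    return randomized
```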
The following explains why the classification result determined based on the plurality of original probabilities is the same as the classification result determined based on the plurality of randomized probabilities:
First, consider the case where every one of the plurality of original probabilities is smaller than the predetermined threshold. As described above, the classification result of the first sample in this case is that the first sample does not belong to any of the predetermined classes. According to the conversion steps above, the randomized probability corresponding to each original probability is likewise a value smaller than the predetermined threshold. Therefore, the classification result determined based on the plurality of randomized probabilities is also that the first sample does not belong to any of the predetermined classes. That is, in this case the classification result determined based on the plurality of original probabilities is the same as the classification result determined based on the plurality of randomized probabilities.
Next, consider the case where at least one of the plurality of original probabilities is not smaller than the predetermined threshold. As described above, the classification result of the first sample in this case is that the first sample belongs to the class corresponding to the maximum probability among the predetermined classes. According to the conversion steps above, the randomized probability corresponding to the maximum probability remains the largest and is not smaller than the predetermined threshold. Therefore, the classification result determined based on the plurality of randomized probabilities is also that the first sample belongs to the class corresponding to the maximum probability among the predetermined classes. That is, in this case the classification result determined based on the plurality of original probabilities is the same as the classification result determined based on the plurality of randomized probabilities.
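The invariance argued above can also be checked empirically with a short property test over random inputs (a sanity check only, not part of the embodiment); it reuses the hypothetical `classify` and `randomize_probs` helpers sketched earlier.

```python
import numpy as np

rng = np.random.default_rng(42)
threshold = 0.5
for _ in range(10_000):
    probs = rng.dirichlet(np.ones(4))        # 4 classes; original probabilities sum to 1
    randomized = randomize_probs(probs, threshold, rng)
    assert classify(probs, threshold) == classify(randomized, threshold)
print("classification result preserved on all trials")
```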
In summary, the scheme provided in this specification can randomly perturb the original probabilities of a sample while ensuring that the classification result of the sample remains unchanged.
It should be noted that, since the classification result determined based on the plurality of original probabilities is the same as the classification result determined based on the plurality of randomized probabilities, step 206 need not be executed first; instead, after step 208 is completed, the classification result of the first sample may be determined according to the plurality of randomized probabilities and the predetermined threshold.
In addition, in practical applications step 206 and step 208 need not follow a fixed order: step 208 may be executed before step 206, or the two steps may be executed in parallel, which is not described in further detail here.
Step 210, the classification result of the first sample and the plurality of randomized probabilities are output.
It should be understood that steps 204-210 above describe the output process for only one sample in the sample set. In the same way, a classification result and a plurality of randomized probabilities can be output for every sample in the sample set.
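Putting the pieces together, an end-to-end sketch over the whole sample set might look as follows; it reuses the hypothetical `clf`, `classify`, and `randomize_probs` names from the earlier sketches, and the output format is an assumption made for illustration.

```python
import numpy as np

def output_model_information(clf, sample_set, threshold=0.5, rng=None):
    """For each sample: compute the original probabilities, determine the
    classification result, randomize the probabilities, and collect the
    result together with the randomized probabilities for output."""
    rng = rng or np.random.default_rng()
    outputs = []
    for x in sample_set:
        original_probs = clf.predict_proba(np.asarray(x).reshape(1, -1))[0]
        result = classify(original_probs, threshold)
        randomized = randomize_probs(original_probs, threshold, rng)
        outputs.append({"result": result, "randomized_probs": randomized.tolist()})
    return outputs
```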
In one example, after the classification result and the plurality of randomized probabilities have been output for each sample in the sample set, i.e., after the model output data has been obtained, the model output data may be used to assist in training a generative adversarial network (GAN), in order to verify whether the GAN training process is interfered with, or whether the difficulty of training the GAN is increased. The training difficulty here can be measured, for example, by whether the time complexity increases.
In one example, this auxiliary training can be implemented by incorporating the randomized probabilities of the samples into the loss function of the GAN.
The GAN is used to generate adversarial samples that simulate attacks on the classification model. It consists of a generator (generative model) and a discriminator (discriminative model), which are trained against each other in a game so as to produce good output. During training, the generator aims to produce simulated samples that approach the real samples as closely as possible given the data distribution of the real samples, while the discriminator aims to distinguish the simulated samples from the real samples as well as possible, so the training of the generator and the discriminator forms a dynamic game. Once training is complete, the generator can generate samples directly according to the learned data distribution.
It should be understood that, because the scheme outputs a plurality of randomized probabilities for each sample, the original probabilities of the samples are randomly perturbed while the classification results remain unchanged. The uncertainty of the model output is therefore greatly increased without changing the function of the original classification model, so when the model output data is used to assist in training the GAN, the difficulty of finding the gradient-descent direction increases and the GAN training process is interfered with. When the GAN training process is interfered with, generating adversarial samples becomes more difficult; and when adversarial samples are harder to generate, attacking the classification model after it is released online also becomes harder, which achieves the purpose of protecting the classification model.
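One way to see the effect (an illustrative experiment, not the specific GAN of the embodiment): a black-box attacker who estimates gradients from the returned probability of a target class obtains a usable direction when the raw probability leaks, but only noise once each query returns a freshly randomized value. The finite-difference scheme, the logistic-regression stand-in for the classification model, and the reuse of the hypothetical `randomize_probs` helper are all assumptions made for this example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # toy labels for a smooth surrogate model
clf = LogisticRegression().fit(X, y)

target_class = 1      # hypothetical class whose probability the attacker probes
threshold = 0.5

def fd_gradient(score_fn, x, eps=1e-3):
    """Central-difference estimate of d(score)/dx, one coordinate at a time."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (score_fn(x + e) - score_fn(x - e)) / (2 * eps)
    return grad

def leaked_score(x):
    # What the attacker sees if the raw original probability is exposed.
    return clf.predict_proba(x.reshape(1, -1))[0, target_class]

def protected_score(x):
    # What the attacker sees under this scheme: a fresh randomized value per query.
    probs = clf.predict_proba(x.reshape(1, -1))[0]
    return randomize_probs(probs, threshold)[target_class]

x0 = np.zeros(8)
print(fd_gradient(leaked_score, x0))     # smooth, consistent direction
print(fd_gradient(protected_score, x0))  # dominated by per-query randomness
```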
It should be noted that, although the embodiments of this specification take a classification model as the example of a machine learning model when describing the information output method, the scheme provided in this specification can in fact be applied to any model that outputs numerical data such as probabilities or scores, and this specification does not limit this.
Corresponding to the classification model information output method above, an embodiment of the present specification further provides a classification model information output apparatus. As shown in Fig. 3, the apparatus may include:
an obtaining unit 302, configured to obtain a sample set of the classification model.
The classification model may include any one of: neural networks, gradient decision trees, Bayesian classification, support vector machines, and random forests.
An input unit 304, configured to, for any first sample in the sample set obtained by the obtaining unit 302, input the first sample into the classification model to obtain a plurality of original probabilities of the first sample corresponding to each predetermined class.
A determining unit 306, configured to determine a classification result of the first sample according to the plurality of original probabilities and a predetermined threshold.
The determining unit 306 may specifically be configured to:
and if the original probabilities are all smaller than the preset threshold value, determining that the classification result of the first sample is that the first sample does not belong to any one of the preset classes.
And if at least one of the original probabilities is not less than the preset threshold, determining that the classification result of the first sample belongs to the class corresponding to the maximum probability in each preset class.
The conversion unit 308 is configured to convert, by using a random algorithm, the multiple original probabilities based on a predetermined threshold to obtain multiple randomized probabilities corresponding to the multiple original probabilities respectively. The random algorithm makes the classification result determined based on the plurality of original probabilities the same as the classification result determined based on the plurality of randomized probabilities.
The conversion unit 308 may specifically be configured to:
if the original probabilities are all smaller than the preset threshold, for each original probability, randomly generating a numerical value smaller than the preset threshold as the corresponding randomization probability.
And if at least one of the original probabilities is not less than a preset threshold, randomly generating a numerical value which is not less than the preset threshold as the corresponding randomized probability according to the maximum probability. For each of the other probabilities, a value less than the randomization probability corresponding to the maximum probability is randomly generated as its corresponding randomization probability.
An output unit 310, configured to output the classification result of the first sample determined by the determining unit 306 and the plurality of randomized probabilities converted by the converting unit 308.
Optionally, the classification model is a text classification model, and each sample in the sample set is a text; or, the classification model is a picture classification model, and each sample in the sample set is a picture; or, the classification model is an audio classification model, and each sample in the sample set is an audio.
The functions of each functional module of the device in the above embodiments of the present description may be implemented through each step of the above method embodiments, and therefore, a specific working process of the device provided in one embodiment of the present description is not repeated herein.
The classification model information output device provided in an embodiment of the present specification can achieve the purpose of protecting a classification model.
In another aspect, embodiments of the present specification provide a computer-readable storage medium having a computer program stored thereon which, when executed in a computer, causes the computer to perform the method shown in Fig. 2.
In another aspect, embodiments of the present specification provide a computing device comprising a memory having executable code stored therein and a processor that, when executing the executable code, implements the method shown in Fig. 2.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules that may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in a server. Of course, the processor and the storage medium may also reside as discrete components in a server.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above-mentioned embodiments, objects, technical solutions and advantages of the present specification are further described in detail, it should be understood that the above-mentioned embodiments are only specific embodiments of the present specification, and are not intended to limit the scope of the present specification, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present specification should be included in the scope of the present specification.
Claims (10)
1. A classification model information output method includes:
obtaining a sample set of a classification model;
for any first sample in the sample set, inputting the first sample into the classification model to obtain a plurality of original probabilities of the first sample corresponding to each predetermined category;
determining a classification result of the first sample according to the plurality of original probabilities and a predetermined threshold;
if the plurality of original probabilities are all smaller than the predetermined threshold, randomly generating, for each original probability, a numerical value smaller than the predetermined threshold as a corresponding randomized probability;
if at least one of the plurality of original probabilities is not smaller than the predetermined threshold, randomly generating, for the maximum probability, a numerical value not smaller than the predetermined threshold as a corresponding randomized probability; and for each other probability, randomly generating a numerical value smaller than the randomized probability corresponding to the maximum probability as the corresponding randomized probability;
outputting a classification result of the first sample and a plurality of randomized probabilities.
2. The method of claim 1, the determining a classification result of the first sample according to the plurality of original probabilities and a predetermined threshold comprising:
if the plurality of original probabilities are all smaller than the predetermined threshold, determining that the classification result of the first sample is that the first sample does not belong to any of the predetermined categories;
if at least one of the plurality of original probabilities is not smaller than the predetermined threshold, determining that the classification result of the first sample is that the first sample belongs to the category corresponding to the maximum probability among the predetermined categories.
3. The method of claim 1, wherein:
the classification model is a text classification model, and each sample in the sample set is a text; or the like, or, alternatively,
the classification model is a picture classification model, and each sample in the sample set is a picture; or the like, or, alternatively,
the classification model is an audio classification model, and each sample in the sample set is an audio.
4. The method of claim 1, the classification model comprising any of: neural networks, gradient decision trees, Bayesian classification, support vector machines, and random forests.
5. A classification model information output apparatus comprising:
an obtaining unit, configured to obtain a sample set of the classification model;
an input unit, configured to input a first sample to the classification model for any first sample in the sample set acquired by the acquisition unit, so as to obtain a plurality of original probabilities that the first sample corresponds to each predetermined category;
a determining unit, configured to determine a classification result of the first sample according to the plurality of original probabilities and a predetermined threshold;
a conversion unit, configured to randomly generate, for each original probability in the plurality of original probabilities, a numerical value smaller than the predetermined threshold as a corresponding randomized probability if the plurality of original probabilities are all smaller than the predetermined threshold;
if at least one of the plurality of original probabilities is not smaller than the predetermined threshold, randomly generate, for the maximum probability, a numerical value not smaller than the predetermined threshold as a corresponding randomized probability; and for each other probability, randomly generate a numerical value smaller than the randomized probability corresponding to the maximum probability as the corresponding randomized probability;
an output unit, configured to output the classification result of the first sample determined by the determination unit and the plurality of randomized probabilities converted by the conversion unit.
6. The apparatus of claim 5, the determining unit being specifically configured to:
if the plurality of original probabilities are all smaller than the predetermined threshold, determine that the classification result of the first sample is that the first sample does not belong to any of the predetermined categories;
if at least one of the plurality of original probabilities is not smaller than the predetermined threshold, determine that the classification result of the first sample is that the first sample belongs to the category corresponding to the maximum probability among the predetermined categories.
7. The apparatus of claim 5, wherein:
the classification model is a text classification model, and each sample in the sample set is a text; or the like, or, alternatively,
the classification model is a picture classification model, and each sample in the sample set is a picture; or the like, or, alternatively,
the classification model is an audio classification model, and each sample in the sample set is an audio.
8. The apparatus of claim 5, the classification model comprising any of: neural networks, gradient decision trees, Bayesian classification, support vector machines, and random forests.
9. A computer-readable storage medium, on which a computer program is stored which, when executed in a computer, causes the computer to carry out the method of any one of claims 1-4.
10. A computing device comprising a memory having executable code stored therein and a processor that, when executing the executable code, implements the method of any of claims 1-4.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010407910.7A CN111488950B (en) | 2020-05-14 | 2020-05-14 | Classification model information output method and device |
PCT/CN2021/093386 WO2021228152A1 (en) | 2020-05-14 | 2021-05-12 | Classification model information output |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010407910.7A CN111488950B (en) | 2020-05-14 | 2020-05-14 | Classification model information output method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111488950A CN111488950A (en) | 2020-08-04 |
CN111488950B (en) | 2021-10-15
Family
ID=71811310
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010407910.7A Active CN111488950B (en) | 2020-05-14 | 2020-05-14 | Classification model information output method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111488950B (en) |
WO (1) | WO2021228152A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111488950B (en) * | 2020-05-14 | 2021-10-15 | Alipay (Hangzhou) Information Technology Co., Ltd. | Classification model information output method and device |
CN112116028B (en) * | 2020-09-29 | 2024-04-26 | 联想(北京)有限公司 | Model decision interpretation realization method and device and computer equipment |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106503236B (en) * | 2016-10-28 | 2020-09-11 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Artificial intelligence based problem classification method and device |
US10963737B2 (en) * | 2017-08-01 | 2021-03-30 | Retina-AI Health, Inc. | Systems and methods using weighted-ensemble supervised-learning for automatic detection of ophthalmic disease from images |
US11042810B2 (en) * | 2017-11-15 | 2021-06-22 | Target Brands, Inc. | Similarity learning-based device attribution |
CN110766086B (en) * | 2019-10-28 | 2022-07-22 | Alipay (Hangzhou) Information Technology Co., Ltd. | Method and device for fusing multiple classification models based on reinforcement learning model |
CN111488950B (en) * | 2020-05-14 | 2021-10-15 | Alipay (Hangzhou) Information Technology Co., Ltd. | Classification model information output method and device |
- 2020-05-14: CN application CN202010407910.7A filed (granted as CN111488950B; status: active)
- 2021-05-12: PCT application PCT/CN2021/093386 filed (published as WO2021228152A1; status: application filing)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105740416A (en) * | 2016-01-29 | 2016-07-06 | Wuhan University of Technology | Multi-agent and ant colony algorithm-based object-oriented remote sensing classification method |
CN107180022A (en) * | 2016-03-09 | 2017-09-19 | Alibaba Group Holding Limited | Object classification method and device |
CN110503155A (en) * | 2019-08-23 | 2019-11-26 | Tencent Technology (Shenzhen) Co., Ltd. | Information classification method, related apparatus and server |
CN110888996A (en) * | 2019-11-22 | 2020-03-17 | Shenyang Jianzhu University | Text classification method based on range convolution neural network |
Non-Patent Citations (1)
Title |
---|
IMM-FPF Algorithm with Adaptive Transition Probability; Qin Ling; Journal of Wuhan Polytechnic University; 2018-10-30; Vol. 37, No. 5; pp. 39-46 *
Also Published As
Publication number | Publication date |
---|---|
CN111488950A (en) | 2020-08-04 |
WO2021228152A1 (en) | 2021-11-18 |
Similar Documents
Publication | Title |
---|---|
CN109800306B (en) | Intention analysis method, device, display terminal and computer readable storage medium | |
CN111241291B (en) | Method and device for generating countermeasure sample by utilizing countermeasure generation network | |
CN111241287A (en) | Training method and device for generating generation model of confrontation text | |
US20190385068A1 (en) | Program storage medium, apparatus and method provided with ruleset-selectable inference engine | |
CN111538809B (en) | Voice service quality detection method, model training method and device | |
CN111866004B (en) | Security assessment method, apparatus, computer system, and medium | |
CN111078876A (en) | Short text classification method and system based on multi-model integration | |
CN111340233B (en) | Training method and device of machine learning model, and sample processing method and device | |
CN110825969A (en) | Data processing method, device, terminal and storage medium | |
CN111275780B (en) | Character image generation method and device | |
CN117112744B (en) | Assessment method and device for large language model and electronic equipment | |
CN111522916A (en) | Voice service quality detection method, model training method and device | |
CN111488950B (en) | Classification model information output method and device | |
Dahanayaka et al. | Robust open-set classification for encrypted traffic fingerprinting | |
Abady et al. | A siamese-based verification system for open-set architecture attribution of synthetic images | |
CN110879832A (en) | Target text detection method, model training method, device and equipment | |
CN116569210A (en) | Normalizing OCT image data | |
Simao et al. | A technique to reduce the test case suites for regression testing based on a self-organizing neural network architecture | |
US20210117552A1 (en) | Detection of common patterns in user generated content with applications in fraud detection | |
CN117873558A (en) | Self-recognition technology debt detection method and device based on size model fusion | |
CN112463964B (en) | Text classification and model training method, device, equipment and storage medium | |
CN111639718B (en) | Classifier application method and device | |
CN115713669A (en) | Image classification method and device based on inter-class relation, storage medium and terminal | |
CN114913513A (en) | Method and device for calculating similarity of official seal images, electronic equipment and medium | |
CN111881266A (en) | Response method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |