CN110727770A - Intelligent request screening method and device, computer equipment and storage medium - Google Patents
- Publication number
- CN110727770A CN110727770A CN201910816251.XA CN201910816251A CN110727770A CN 110727770 A CN110727770 A CN 110727770A CN 201910816251 A CN201910816251 A CN 201910816251A CN 110727770 A CN110727770 A CN 110727770A
- Authority
- CN
- China
- Prior art keywords
- user
- target user
- generator
- request
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The application discloses an intelligent request screening method and device, a computer device, and a storage medium, wherein the intelligent request screening method comprises the following steps: acquiring request information of a target user through intelligent question answering; acquiring, from a database, target user information corresponding to the basic information of the target user; judging whether the target user is a high-risk user by importing the request information and the target user information into a judgment model; when the target user is judged not to be a high-risk user, judging whether the requested quota of the target user is within the applicable quota threshold range of the target user; and when the requested quota of the target user is judged to be within the applicable quota threshold range of the target user, directly approving the request corresponding to the request information. Acquiring the target user's request information through intelligent question answering helps users without independent writing ability enter accurate request information, screens out qualifying non-high-risk users, and speeds up disbursement, improving both lending efficiency and user experience.
Description
Technical Field
The present application relates to the field of computers, and in particular, to a method and an apparatus for intelligently screening requests, a computer device, and a storage medium.
Background
Existing loan applicants fill in materials either with the help of service personnel or by themselves. When service personnel help fill in the materials, problems with the personnel's own competence can cause unnecessary losses to the loan applicant; when applicants fill in materials themselves, some cannot complete the task independently because of limits in their education or life skills, for example being unable to use an input method. In addition, at the request-auditing stage the user's request content must be judged comprehensively, and manual auditing is inefficient and error-prone. How to avoid quality risks, how to serve users who cannot fill in materials independently, and how to improve auditing efficiency are therefore urgent problems to be solved.
Disclosure of Invention
The main purpose of the present application is to provide a method and an apparatus for intelligently screening requests, a computer device, and a storage medium, which obtain user information in a question-and-answer manner, can obtain request information from users who cannot fill in information independently, and can improve the efficiency of auditing the requests corresponding to that request information.
In order to achieve the above object, the present application provides a method for intelligently screening requests, including:
acquiring request information of a target user through intelligent question answering, wherein the request information comprises a requested quota and basic information of the target user;
acquiring, from a database, target user information corresponding to the basic information of the target user;
judging whether the target user is a high-risk user by importing the request information and the target user information into a judgment model, wherein the judgment model comprises a first generator and a first discriminator, the first generator is used for generating evaluation information of the target user, the first discriminator is used for judging whether the evaluation information reaches a preset high-risk user standard, and the evaluation information comprises an applicable quota threshold range of the target user;
when the target user is judged not to be a high-risk user, judging whether the requested quota of the target user is within the applicable quota threshold range of the target user;
when the requested quota of the target user is judged to be within the applicable quota threshold range of the target user, directly approving the request corresponding to the request information.
Further, after the request information and the target user information obtained from the database are imported into the judgment model to judge whether the target user is a high-risk user, the method further comprises the following steps:
and if the user is a high-risk user, sending reminding information to a setting unit.
Further, the method for acquiring the request information of the target user through the intelligent question answering comprises the following steps:
obtaining a target user question;
generating a target generation problem corresponding to the target user problem according to a problem optimization model, wherein the problem optimization model is obtained based on generative adversarial network training and comprises a second generator and a second discriminator, the second generator being used for generating the target generation problem corresponding to the target user problem;
judging, by the second discriminator, whether the generation quality of the target generation problem is higher than a first preset threshold, wherein the generation quality is used for indicating the probability that the target generation problem is a standard problem;
and if so, determining a target answer according to the target generation question, wherein the target answer comprises the request information of the target user.
Further, before obtaining the target user question, the method further includes:
obtaining a training set, wherein the training set comprises source user problem-source specification problem pairs, each pair associating a source user problem with the set of source specification problems corresponding to it;
and carrying out generative adversarial network training according to the training set to obtain the problem optimization model.
Further, obtaining the training set comprises:
calculating similarity values of standard questions in a standard data set and source user questions in a user log, wherein the standard data set is used for storing the standard questions, and the user log comprises interaction records of users and a question-answering system;
and taking each source user problem whose similarity value to a standard problem is greater than the second preset threshold as a candidate user problem, so as to determine which candidate user problems are semantically consistent with the standard problem and obtain source user problem-source specification problem pairs, wherein the source specification problems include the standard problems.
Further, the step of performing generative adversarial network training according to the training set to obtain the problem optimization model comprises:
inputting the source user questions in the training set into a second generator so that the second generator performs model training, and obtaining generated questions according to the trained models;
acquiring the generated problems produced by the second generator and storing them in a generated data set, the generated data set being used for storing generated problems;
inputting the source specification problem in the training set and the generation problem in the generated data set into a second discriminator, so that the second discriminator takes the source specification problem in the training set as a positive sample, and takes the generation problem in the generated data set as a negative sample to carry out model training;
inputting the generated problem generated by the second generator into a second discriminator so that the second discriminator performs attribution rate discrimination on the generated problem, wherein the attribution rate is used for indicating the probability that the problem belongs to the standard data set or the generated data set;
acquiring a discrimination result of the second discriminator on the problem;
inputting the discrimination result into the second generator, so that the second generator performs model training according to the generated problems discriminated by the second discriminator and the discrimination result, and generates new generated problems according to the trained model;
acquiring a new generation problem generated by the second generator, and storing the new generation problem in a generation data set;
judging whether a preset qualified condition is reached;
if the qualified condition is met, ending the adversarial training of the problem optimization model to obtain the final problem optimization model; if the qualified condition is not met, making the second generator and the second discriminator continue cyclic adversarial training.
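The cyclic adversarial training in the steps above can be sketched as follows. This is a minimal illustration only: the `train`, `generate`, `score`, and `feedback` methods are hypothetical stand-ins for the second generator's and second discriminator's model updates, not interfaces defined in the patent, and the stopping rule is the change-in-discrimination-result check described in the qualification-condition step.

```python
def adversarial_training(generator, discriminator, source_user_qs, source_norm_qs,
                         threshold=1e-3, max_rounds=100):
    """Cyclic adversarial training sketch for the problem optimization model.

    generator / discriminator are assumed objects (hypothetical interfaces):
      generator.train(qs), generator.generate(q), generator.feedback(qs, score)
      discriminator.train(positive=..., negative=...), discriminator.score(qs)
    """
    generated_set = []          # the "generated data set" of the steps above
    prev_score = None
    for _ in range(max_rounds):
        # second generator trains on source user problems, emits generated problems
        generator.train(source_user_qs)
        gen_qs = [generator.generate(q) for q in source_user_qs]
        generated_set.extend(gen_qs)
        # second discriminator trains: source specification problems as positive
        # samples, generated problems as negative samples
        discriminator.train(positive=source_norm_qs, negative=generated_set)
        # discriminator scores the attribution rate of the generated problems
        score = discriminator.score(gen_qs)
        # generator retrains from the discrimination result
        generator.feedback(gen_qs, score)
        # qualification condition: variation of the discrimination result is small
        if prev_score is not None and abs(score - prev_score) < threshold:
            break
        prev_score = score
    return generator
```

The loop alternates the two model updates and stops once the discriminator's output stabilizes, which matches the "variation smaller than a third preset threshold" condition described next.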
Further, the step of judging whether the preset qualified condition is reached includes:
judging whether the variation of the discrimination result, obtained when the second discriminator discriminates the problems generated by the second generator, is smaller than a third preset threshold;
and if the judgment result is smaller than the third preset threshold value, judging that the preset qualified condition is reached.
The present application also provides an intelligent request screening device, comprising:
the acquisition module is used for acquiring request information of a target user through intelligent question answering, wherein the request information comprises a request limit and basic information of the target user;
the extraction module is used for acquiring target user information corresponding to the basic information of the target user from a database;
the screening module is used for judging whether the target user is a high-risk user or not by leading the request information and the target user information into a judging model, wherein the judging model comprises a first generator and a first discriminator, the first generator is used for generating evaluation information of the target user, the first discriminator is used for judging whether the evaluation information reaches a preset high-risk user standard or not, and the evaluation information comprises an applicable quota threshold range of the target user;
the limit judging module is used for judging, when the target user is judged not to be a high-risk user, whether the requested quota of the target user is within the applicable quota threshold range of the target user;
and the approval module is used for directly approving the request when the requested quota of the target user is judged to be within the applicable quota threshold range of the target user.
The present application also provides a computer device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above method when executing the computer program.
The present application also proposes a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
According to the intelligent screening method and device, the computer equipment, and the storage medium above, the request information of the target client is obtained through intelligent question answering, which reduces quality risk and helps users without independent writing ability enter accurate request information; qualifying non-high-risk users are screened out for fast disbursement, improving lending efficiency and user experience.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the intelligent request screening method of the present application;
FIG. 2 is a block diagram illustrating the structure of an embodiment of the intelligent screening apparatus of the present application;
fig. 3 is a block diagram illustrating a structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, the present application provides a method for intelligently screening requests, including:
S1, acquiring request information of a target user through intelligent question answering, wherein the request information comprises a requested quota and basic information of the target user;
S2, acquiring, from the database, target user information corresponding to the basic information of the target user;
S3, judging whether the target user is a high-risk user by importing the request information and the target user information into a judgment model, wherein the judgment model comprises a first generator and a first discriminator, the first generator is used for generating evaluation information of the target user, the first discriminator is used for judging whether the evaluation information reaches a preset high-risk user standard, and the evaluation information comprises an applicable quota threshold range of the target user;
S4, when the target user is judged not to be a high-risk user, judging whether the requested quota of the target user is within the applicable quota threshold range of the target user;
S5, when the requested quota of the target user is judged to be within the applicable quota threshold range of the target user, directly approving the request corresponding to the request information.
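As an illustration only, the flow of steps S1-S5 might be sketched as follows; `database.lookup` and `judgment_model.evaluate` are hypothetical interfaces introduced for the sketch, not names from the patent, and the judgment model's internal generator/discriminator pair is hidden behind `evaluate`.

```python
from dataclasses import dataclass

@dataclass
class Request:
    amount: float      # requested quota gathered via intelligent Q&A (S1)
    basic_info: dict   # basic information of the target user

def screen_request(request, database, judgment_model):
    """Sketch of steps S2-S5 for one request."""
    # S2: look up target user information matching the basic information
    user_info = database.lookup(request.basic_info)
    # S3: the judgment model returns whether the user is high-risk and the
    # applicable quota threshold range from the evaluation information
    is_high_risk, (low, high) = judgment_model.evaluate(request, user_info)
    if is_high_risk:
        return "remind"         # send reminding information to the setting unit
    # S4/S5: approve directly when the requested quota is within range
    if low <= request.amount <= high:
        return "approve"
    return "manual_review"      # outside range: fall back, e.g. group screening
```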
In step S1, when the request information of the target user is obtained through intelligent question answering, the user's loan request information may be collected by voice. Voice input is generally faster and more efficient, and makes it easier for target users with little education to make a request and fill in the request information. The requested quota and the reasons for the request may come in multiple groups: there may be several reasons, each corresponding to a quota, i.e. several sub-requests, with the total requested quota being the sum of the quotas corresponding to all reasons. Intelligent question answering is not limited to voice interaction; it may also include operations such as uploading pictures, scanning physical objects, scanning two-dimensional codes, and scanning magnetic cards. The user's basic information can be obtained through voice question answering alone, or by uploading pictures and confirming through voice question answering.
In step S2, the target user information is obtained by matching the target user's basic information in the database. The target user information obtained from the database includes information such as the target client's outstanding debt and bad-credit records. To improve the timeliness of this information, the database may be a third-party database shared by multiple parties: multiple lenders upload users' basic lending information, bad-credit information, and the like in real time, so the user information is updated in real time, the retrieved data is more current, and lenders' risk is reduced.
In step S3, the request information and the target user information obtained from the database are imported into the judgment model. The first generator identifies this information and filters out meaningless content to generate evaluation information of the target user that meets the format standard. The first discriminator compares the evaluation information generated by the first generator with preset standard evaluation information to judge whether it reaches the high-risk user standard; when it does, the target user is determined to be a high-risk user. The judgment model also performs feature selection on the target client information acquired from the database, treating that information as the original feature set. Feature selection is the process of selecting an optimal subset from the original feature set, in which the goodness of a given feature subset is measured by a specific evaluation criterion; through feature selection, redundant and irrelevant features are removed from the original feature set while useful features are preserved.
In step S4, in some embodiments the total requested quota is judged directly. In other embodiments, when there are multiple request items and the total requested quota is not within the applicable quota threshold range, the request items may be combined into multiple groups, each containing at least one request item; whether the sum of each group's requested quotas is within the applicable quota threshold range is judged, the groups within range are compared, and the group with the highest total requested quota is determined to meet the requirement.
In step S5, when the total requested quota is within the applicable quota threshold range of the target user, the request is approved directly; when the total requested quota is not within the applicable quota threshold range of the target user, a group is screened out as above, and when that group's requested quota is within the range, the request for that group is approved directly.
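The grouping procedure of steps S4-S5 amounts to searching for the subset of request items whose total quota is largest while still within the applicable range. The patent does not specify an algorithm, so the following is a brute-force sketch; exhaustive subset enumeration is only practical for the small numbers of request items a single application would contain.

```python
from itertools import combinations

def best_request_group(amounts, low, high):
    """Among all non-empty subsets of request-item quotas, return the subset
    whose total is within [low, high] and is the largest such total.
    Returns (indices, total), or (None, 0.0) when no subset qualifies."""
    best, best_total = None, 0.0
    n = len(amounts)
    for r in range(1, n + 1):
        for idx in combinations(range(n), r):
            total = sum(amounts[i] for i in idx)
            # keep the group with the highest in-range total requested quota
            if low <= total <= high and total > best_total:
                best, best_total = idx, total
    return best, best_total
```

For example, with item quotas of 300, 500, and 400 and an applicable range of 100 to 800, the total of 1200 is out of range, but the group {300, 500} reaches the highest in-range total of 800 and would be approved.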
Further, after the step S3 of determining whether the target user is a high-risk user by importing the request information and the target user information into the determination model, the method further includes:
and S41, if the user is a high-risk user, sending reminding information to a setting unit.
In step S41, the setting unit may be a review department responsible for reviewing abnormal requests, a review system that automatically reviews abnormal requests, a database that counts abnormal applications, and so on. The reminding information may include the basis for judging the target user to be a high-risk user, the target user's request information, and the like.
Further, the method for acquiring the request information of the target user through the intelligent question answering in step S1 includes:
S11, obtaining a target user question;
S12, generating a target generation problem corresponding to the target user problem according to a problem optimization model, wherein the problem optimization model is obtained based on generative adversarial network training and comprises a second generator and a second discriminator, the second generator being used for generating the target generation problem corresponding to the target user problem;
S13, judging, by the second discriminator, whether the generation quality of the target generation problem is higher than a first preset threshold, wherein the generation quality is used for indicating the probability that the target generation problem is a standard problem;
S14, if it is higher than the first preset threshold, determining a target answer according to the target generation problem, wherein the target answer comprises the request information of the target user.
In step S11, the target user question is obtained; it may be elicited by a guiding question, for example asking "What service do you need? 1 is # # # # #, 2 is # #."
In steps S12-S14, the problem optimization model is obtained based on generative adversarial network training and comprises a second generator and a second discriminator. The second generator is used for generating the target generation problem corresponding to the target user problem; the second discriminator is used for judging whether the generation quality of the target generation problem is higher than the first preset threshold, the generation quality indicating the probability that the target generation problem is a standard problem. Through training of the model, this probability estimate becomes more and more accurate, and the first preset threshold can be raised as the accuracy improves.
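A minimal sketch of this generate-then-gate flow follows; the `generate` and `quality` methods are hypothetical stand-ins for the second generator and second discriminator (the patent does not name such interfaces).

```python
def optimize_question(user_q, generator, discriminator, quality_threshold):
    """Sketch of steps S12-S14: generate a standardized question and accept it
    only when its generation quality clears the first preset threshold."""
    target_q = generator.generate(user_q)       # second generator (S12)
    quality = discriminator.quality(target_q)   # P(target_q is a standard question) (S13)
    if quality > quality_threshold:
        return target_q   # S14: the target answer is then looked up from this question
    return None           # below threshold: fall back, e.g. re-ask the user
```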
Further, before step S11 of obtaining the target user question, the method further includes:
S10, obtaining a training set, wherein the training set comprises source user problem-source specification problem pairs, each pair associating a source user problem with the set of source specification problems corresponding to it;
S20, performing generative adversarial network training according to the training set to obtain the problem optimization model.
In step S10, note that deep neural networks are generally used for supervised learning tasks, i.e. a large amount of labeled data is available for training the model. Therefore, before the problem optimization model is trained, training data must be acquired; in this application, for ease of understanding, the training set is defined as the set of training data, comprising source user problems and the sets of source specification problems corresponding to them. Manually writing a standard problem for each user problem is difficult to implement: on one hand the cost is high, and on the other hand it is difficult to specify a standard way of writing. In this application, a training set can instead be constructed from an existing standard data set and the user log. The standard data set contains the standard problems (and, it can be understood, also stores the standard answers corresponding to them); the user log contains the interaction records between users and the question-answering system, including statistics on hotspot problems, unsolved problems, and so on. Unsolved problems in the user log can therefore serve as source user problems, and standard problems in the standard data set as source specification problems.
In step S20, the problem optimization model is obtained by performing generative adversarial network training according to the training set: the basic problem optimization model is trained through the generative adversarial network to obtain the final problem optimization model, which is then applied in step S12.
Further, the step S10 of obtaining the training set includes:
S101, calculating similarity values between the standard questions in a standard data set and the source user questions in a user log, wherein the standard data set is used for storing the standard questions and the user log comprises interaction records of users with the question-answering system;
S102, taking the source user questions whose similarity value to a standard question is greater than the second preset threshold as candidate user questions, so as to determine which candidate user questions are semantically consistent with the standard question and obtain source user problem-source specification problem pairs, wherein the source specification problems include the standard problems.
In step S101, the similarity values between the standard questions in the standard data set and the source user questions in the user log are calculated.
for the calculation of the similarity value between the standard problem and the source user problem, a Vector Space Model (VSM) based TF-idf (term frequency updated document frequency) algorithm may be adopted, which is implemented as follows:
(1) counting all words w1, w2 and w3 … wn appearing in the corpus according to the word frequency;
(2) each question is represented as an n-dimensional vector: t ═<T1,T2,…,Ti,…,Tn>;
Wherein, TiN is log (M/M), i is not less than 1 and not more than n, n is the TF value which is the frequency of occurrence of the words wi in the problem, M is the number of the problems containing the words wi in the corpus, M is the total number of the problems in the corpus, and log (M/M) is the IDF value. Above TiThe comprehensive expression of (a) reflects the frequency of occurrence of a keyword and the ability of the keyword to distinguish different sentences, namely: the more times a word occurs in a sentence, the more important it is to the sentence.
(3) Let n-dimensional vectors of any two questions be T' and T ", respectively, then the similarity can be calculated as follows using the cosine angle of the two sentence vectors:
it can be understood that, in the present application, the corpus includes the standard problem in the standard data set and the source user problem in the user log, so the similarity value between the standard problem in the standard data set and each source user problem in the user log can be calculated through the above algorithm.
It should be noted that, in practical applications, there are various methods for calculating the similarity value between two questions, for example, a semantic dictionary method, a part-of-speech and word-sequence combination method, a dependency tree method, or an edit distance method may also be used, and the specific application is not limited thereto.
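As one concrete instance of the VSM-based TF-IDF calculation in steps (1)-(3), the following sketch builds the question vectors and the cosine similarity; word segmentation is assumed to have been done already (each question is a list of words), and the specific weighting variant is an illustrative choice, not prescribed by the patent.

```python
import math
from collections import Counter

def tfidf_vectors(questions):
    """Step (1)-(2): build TF-IDF vectors for a list of questions (each a list
    of words). Ti = TFi * log(M/m): TFi is the frequency of word wi in the
    question, m the number of questions containing wi, M the total count."""
    vocab = sorted({w for q in questions for w in q})
    M = len(questions)
    df = {w: sum(1 for q in questions if w in q) for w in vocab}
    vecs = []
    for q in questions:
        tf = Counter(q)
        vecs.append([tf[w] * math.log(M / df[w]) for w in vocab])
    return vecs

def cosine(a, b):
    """Step (3): cosine of the angle between two question vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```

Questions sharing discriminative words score higher: two loan-related questions end up more similar to each other than either is to an unrelated question.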
In step S102, the source user questions whose similarity value to the standard question is greater than the second preset threshold are taken as candidate user questions.
after the similarity values between the standard questions and the source user questions are obtained through calculation, the source user questions with the similarity values larger than the second preset threshold value with the standard questions are used as candidate user questions, and it can be understood that the number of the candidate user questions can be 0, 1 or more.
It should be noted that, in practical applications, there are various ways to determine the candidate user questions. For example, the source user questions may be sorted in descending order of their similarity value to the standard question, and a preset number of the top-ranked source user questions may be selected as candidate user questions; the way of determining the candidate user questions is therefore not limited in this application.
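The two candidate-selection strategies just described (thresholding on the similarity value, or keeping a preset number of top-ranked questions) can be sketched as follows; the function names and the (question, similarity) pair layout are illustrative assumptions:

```python
def candidates_by_threshold(similarities, threshold):
    """Keep the source user questions whose similarity to the standard
    question exceeds the second preset threshold (0, 1 or more results)."""
    return [q for q, s in similarities if s > threshold]

def candidates_by_top_n(similarities, n):
    """Alternative: sort by similarity value, descending, and keep the
    top n source user questions as candidates."""
    ranked = sorted(similarities, key=lambda qs: qs[1], reverse=True)
    return [q for q, _ in ranked[:n]]
```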
Optionally, in order to ensure semantic consistency between the candidate user questions and the standard question, in practical applications the semantically inconsistent questions among the candidate user questions may be removed through manual review. For example, if the standard question is "how much is the application fee?" and the determined candidate user questions include "how much money is the application fee?", "how good is the application fee?" and "how much is an application fee?", the semantically inconsistent candidate "how good is the application fee?" is removed upon review.
After the candidate user problems corresponding to the standard problem are determined, there may be more than one of them. For example, for standard problem A whose corresponding candidate user problems are {user problem a, user problem b, user problem c}, the source user problem-source specification problem pairs obtained are user problem a-standard problem A, user problem b-standard problem A, and user problem c-standard problem A.
It will be appreciated that different standard questions may correspond to the same source user question. For example, assuming that "when did Sunset Red 999 go on sale?" and "what is the shelf time of Sunset Red 999?" are both standard questions, the source user question "when can I buy Sunset Red 999?" corresponds to both standard questions.
It should be noted that, besides the above-mentioned manner of automatically constructing the training set, there are various ways of obtaining the training set in practical applications; for example, source user question-source specification question pairs may be edited manually, and the details are not limited herein.
Further, step S20 includes:
s201, inputting the source user questions in the training set into a second generator to enable the second generator to carry out model training, and obtaining generation questions according to a trained model;
s202, acquiring the generated problems obtained by the second generator and storing them in a generated data set, wherein the generated data set is used for storing the generated problems;
s203, inputting the source specification problem in the training set and the generation problem in the generated data set into a second discriminator, so that the second discriminator takes the source specification problem in the training set as a positive sample, and takes the generation problem in the generated data set as a negative sample to carry out model training;
s204, inputting the generated problem generated by the second generator into a second discriminator so that the second discriminator discriminates the attribution rate of the generated problem, wherein the attribution rate is used for indicating the probability that the problem belongs to the standard data set or the generated data set;
s205, acquiring the discrimination result of the second discriminator on the generated problem;
s206, inputting the judgment result into a second generator, so that the second generator performs model training according to the generated problem judged by the second judging device and the judgment result, and generates a new generated problem according to the trained model;
s207, acquiring a new generation problem generated by the second generator, and storing the new generation problem in a generation data set;
s208, judging whether a preset qualified condition is reached;
s209, if the qualified condition is met, ending the antagonistic training of the problem optimization model to obtain the final problem optimization model; if the qualified condition is not met, making the second generator and the second discriminator continue the cyclic antagonistic training until the qualified condition is met.
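A skeleton of the alternating training loop of steps S201-S209 might look as follows; the `generator` and `discriminator` objects and their method names are placeholder stand-ins for the second generator and second discriminator, not an implementation from the application:

```python
def adversarial_train(generator, discriminator, training_set,
                      max_iters=1000, stability_eps=0.01):
    """Alternate generator / discriminator updates (steps S201-S209),
    stopping when the discriminator's output stabilises or the
    iteration budget is exhausted."""
    generated_set = []                                           # the generated data set
    prev_score = None
    for _ in range(max_iters):
        fake = generator.generate(training_set.user_questions)   # S201
        generated_set.append(fake)                               # S202
        discriminator.train(pos=training_set.standard_questions, # S203
                            neg=generated_set)
        score = discriminator.score(fake)                        # S204-S205: attribution rate
        generator.train(fake, score)                             # S206
        generated_set.append(generator.generate(                 # S207
            training_set.user_questions))
        # S208-S209: qualified condition (score variation below threshold)
        if prev_score is not None and abs(score - prev_score) < stability_eps:
            break
        prev_score = score
    return generator
```

The iteration-count condition of steps S2083-S2084 would simply be the `max_iters` bound here.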
Steps S201-S202 are to train a problem optimization model according to a training set;
after the training set is obtained, a problem optimization model is trained based on the training set, wherein the problem optimization model comprises a second generator and a second discriminator. In this application, the idea of adversarial training is adopted to train the second generator and the second discriminator alternately, and the finally obtained second generator is used to rewrite user questions into standard questions. In particular, the second generator is a probabilistic generative model whose goal is to generate samples (i.e., natural language questions) consistent with the distribution of the training data (i.e., the source specification questions in the training set). The second discriminator is a classifier whose goal is to accurately discriminate whether a sample (i.e., a natural language question) comes from the training data or from the second generator. In this way, the second generator, which is constantly optimized so that the second discriminator cannot distinguish generated samples from training data samples, and the second discriminator, which is constantly optimized so that such a difference can be resolved, form a "confrontation". The second generator and the second discriminator are trained alternately until a balance is finally reached: the second generator can generate samples that completely fit the training data distribution (so that the second discriminator cannot distinguish them), while the second discriminator can sensitively pick out any sample that does not fit the training data distribution.
Referring to fig. 3, which shows a possible training process provided by the embodiment of the present application, the second generator is responsible for generating, from the source user question, a generated question with the same vocabulary and description style as the standard question. A modified version of a Recurrent Neural Network (RNN) can be used as the second generator of natural language questions, its input being the word sequence obtained by tokenizing the source user questions in the training set. For example, assuming the source user question is "the insurance premium was paid late, can the coverage continue?", the question is tokenized as "insurance/paid/late/,/can/continue/coverage/?", and each word or punctuation mark in the question is replaced by a vector. That is, each word is mapped into a word vector by a word embedding layer, where a word vector is a fixed-length vector to which each word in the natural language is mapped; all the vectors together form a word vector space, each vector being a point in that space, and by introducing a "distance" into this space, the similarity (lexical and semantic) between words can be judged according to the distance between them.
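A toy illustration of the word-embedding layer described above: each word (or punctuation mark) is replaced by a fixed-length vector, and similarity between words is judged by distance in the resulting vector space. Here the embeddings are random for demonstration; a real system would learn them:

```python
import numpy as np

class EmbeddingLayer:
    """Toy word-embedding layer: maps each word in the vocabulary to a
    fixed-length vector; distances in that space stand in for lexical /
    semantic similarity (real embeddings would be learned, not random)."""
    def __init__(self, vocab, dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.table = {w: rng.standard_normal(dim) for w in vocab}

    def embed(self, tokens):
        """Replace each word (or punctuation mark) by its vector."""
        return np.stack([self.table[t] for t in tokens])

    def distance(self, w1, w2):
        """Euclidean distance between two words in the embedding space."""
        return float(np.linalg.norm(self.table[w1] - self.table[w2]))
```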
The sequence dependence between words is then modeled by a Bi-directional Long Short-Term Memory network (Bi-LSTM). The converted word vectors output by the Bi-LSTM serve as the input to the one-hot entry layer. Each time one or more words are selected without repetition from the output of the Bi-LSTM, the one-hot entry layer updates the current state vector in combination with the historical state information. Finally, the output words are calculated according to the current state.
The second discriminator in steps S203-S207 is responsible for discriminating the difference between the generated problem produced by the second generator and the source specification problem, and is trained on the generated problems produced by the second generator. In this application, the quality of a generated problem can be judged from the following three aspects: (1) the literal difference between the generated problem and the source specification problem; (2) the difference between the generated problem and the source specification problem in the word vector space, which mainly measures the semantic difference between them; (3) the entity difference between the generated problem and the source specification problem, which ensures that the entities of the two are as consistent as possible. The second discriminator feeds the discrimination result back to the second generator in the form of a gradient, so that the second generator updates its network parameter values after receiving the gradient, improving the quality of the next generated problem.
It should be noted that, in practical applications, the second generator may also be obtained by using an LSTM or another network instead of the Bi-LSTM network, and the application is not limited in this respect. The following description takes an LSTM as an example:
specifically, the recurrent neural network receives a variable-length input sequence (e.g., a natural language sentence, which can be regarded as a word sequence) and calculates each hidden state variable (hidden state) in turn: the i-th state variable is calculated from the current input word and the state variable of the previous step, h_i = f_h(x_i, h_{i-1}), where f_h is a multi-layer neural network. A simple realisation is f_h(x_i, h_{i-1}) = φ_h(U x_i + W h_{i-1}), where φ_h is a sigmoid function, for example φ_h(z) = 1 / (1 + e^{-z}). In practice, a more complex LSTM (Long Short-Term Memory) or GRU (Gated Recurrent Unit) can be used to model f_h. Meanwhile, after each state variable is determined, the recurrent neural network can successively generate words and finally form a sequence (i.e., a sentence in natural language). The probability of generating the i-th word is p(y_i | y_1, ..., y_{i-1}) = g_h(h_i, y_{i-1}) = φ_g(E y_{i-1} + W_o h_i), and the probability of the whole sentence is p(y) = ∏_i p(y_i | y_1, ..., y_{i-1}). Thus, given a random initial input vector, the recurrent neural network generates a sentence, and its parameters determine the distribution of natural language that can be generated.
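The simple realisation described above, h_i = φ_h(U x_i + W h_{i-1}) with a sigmoid φ_h, and the per-word distribution p(y_i | y_1, ..., y_{i-1}) = φ_g(E y_{i-1} + W_o h_i), can be transcribed directly into NumPy as a toy sketch (random parameters, softmax as φ_g; an LSTM or GRU cell would replace the simple recurrence in practice):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rnn_states(X, U, W):
    """h_i = sigmoid(U x_i + W h_{i-1}): the simple realisation of f_h,
    applied to one input word vector x_i per step."""
    h = np.zeros(W.shape[0])
    states = []
    for x in X:
        h = sigmoid(U @ x + W @ h)
        states.append(h)
    return states

def word_distribution(h, y_prev, E, Wo):
    """p(y_i | y_1..y_{i-1}) = softmax(E y_{i-1} + W_o h_i) over the
    vocabulary, with y_prev the one-hot vector of the previous word."""
    logits = E @ y_prev + Wo @ h
    p = np.exp(logits - logits.max())
    return p / p.sum()
```

The whole-sentence probability is then just the product of the per-step `word_distribution` values along a sampled word sequence.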
In step S209, the second generator and the second discriminator are made to perform cyclic antagonistic training, that is, steps S201 to S207 are continuously repeated so that the second discriminator model and the second generator model undergo antagonistic training, thereby enhancing the ability of the second generator to generate problems that confuse the discriminator and the ability of the second discriminator to discriminate the attribution rate of a problem. Training the second discriminator may specifically be:
inputting K source specification problems in the training set and L generated problems in the generated data set into the second discriminator, so that the second discriminator performs model training with the K source specification problems in the training set as positive example samples and the L generated problems in the generated data set as negative example samples; and inputting the generated problem produced by the second generator into the second discriminator, so that the second discriminator discriminates the attribution rate of the generated problem, wherein the attribution rate is used for indicating the probability that the problem belongs to the standard data set or to the generated data set. Here, K and L are positive integers greater than or equal to 1, and the specific values of K and L may be the same or different.
In the embodiment of the present application, a Convolutional Neural Network (CNN) may be used as the second discriminator of the natural language problem.
Specifically, for the input sequence, a convolution + pooling calculation method is adopted. One way of calculating the convolution is z_i^j = φ(W_j x_{i:i+k-1} + b_j), where z_i^j is the value of the j-th feature at the i-th position after convolution and k is the length of the sliding window. One way of calculating the pooling is to take the maximum (max-pooling): z^j = max_i z_i^j. This convolution and pooling may be repeated multiple times. Finally, letting z = [z^1, z^2, ..., z^l] and taking a softmax, the discriminant function is obtained: D_CNN(q) = φ(W_q z + b_q), where D_CNN(q) gives the probability that a problem comes from the standard data set (i.e., whether or not it is a specification problem).
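A minimal NumPy realisation of the convolution + max-pooling discriminator sketched above: features z_i^j are computed over a sliding window of length k, pooled as z^j = max_i z_i^j, and D_CNN(q) = φ(W_q z + b_q) is produced with a sigmoid φ. The exact filter form is an assumption for illustration; the application leaves the architecture details open:

```python
import numpy as np

def conv_features(X, filters, biases, k):
    """z_i^j: the j-th feature at position i after convolving a sliding
    window of length k over the word-vector sequence X (n x d)."""
    n, d = X.shape
    out = []
    for i in range(n - k + 1):
        window = X[i:i + k].ravel()           # flatten the k word vectors
        out.append(np.tanh(filters @ window + biases))
    return np.array(out)                      # (positions, num_filters)

def max_pool(Z):
    """z^j = max_i z_i^j: max-pooling over positions."""
    return Z.max(axis=0)

def d_cnn(X, filters, biases, k, Wq, bq):
    """D_CNN(q) = sigmoid(W_q z + b_q): probability that question q
    comes from the standard data set."""
    z = max_pool(conv_features(X, filters, biases, k))
    return float(1.0 / (1.0 + np.exp(-(Wq @ z + bq))))
```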
In the training process of the second generator model and the second discriminator model, antagonism training can be performed through the second generator model and the second discriminator model, so that the capacities of the two models are increased. Therefore, the embodiment of the present application may further include the following steps:
acquiring the discrimination result of the second discriminator on the generated problem, and inputting the discrimination result into the second generator, so that the second generator performs model training according to the generated problem discriminated by the second discriminator and the discrimination result, and generates a new generated problem according to the trained model; and acquiring the new generated problem generated by the second generator and storing it in the generated data set. K source specification problems in the training set and L randomly selected generated problems in the generated data set are then input into the second discriminator, so that the second discriminator performs model training with the K source specification problems as positive example samples and the L generated problems as negative example samples, wherein the L generated problems include the new generated problems.
The second discriminator performs model training according to the positive example samples and the negative example samples, discriminates the attribution rate of each generated problem produced by the second generator through the trained second discriminator model, and sends the discrimination result to the second generator, so that the second generator performs model training according to the new generated problem discriminated by the second discriminator and the discrimination result. Cyclic antagonistic training is thus performed, improving the ability of the second generator to generate problems that confuse the discriminator and the ability of the second discriminator to discriminate the attribution rate of generated problems.
Referring to the above example, specifically, the training method of the second discriminator is as follows:
specifically, a gradient descent algorithm is adopted to solve θ_D, updating θ_D along the gradient of the discriminator loss so as to increase the attribution rate assigned to the source specification problems (positive example samples) and decrease that assigned to the generated problems (negative example samples):
Specifically, the training method of the second generator may be:
solving θ_G by adopting a gradient descent algorithm:
According to the REINFORCE algorithm, the gradient with respect to θ_G can be estimated by weighting the gradient of the log-probability of each sampled generated problem by the reward given by the second discriminator:
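As a toy illustration of a REINFORCE-style update for the second generator, the following sketch performs one policy-gradient step on a categorical policy p = softmax(θ), using a per-sample reward in place of the second discriminator's score D(q); the toy policy and learning rate are assumptions for demonstration, not the network of the application:

```python
import numpy as np

def reinforce_update(theta, sampled_ids, rewards, lr=0.1):
    """One REINFORCE step on a toy categorical policy p = softmax(theta):
    grad = sum over samples of reward(y) * d/dtheta log p(y).  The reward
    would be the second discriminator's score for each generated problem."""
    p = np.exp(theta - theta.max())
    p /= p.sum()
    grad = np.zeros_like(theta)
    for y, r in zip(sampled_ids, rewards):
        one_hot = np.zeros_like(theta)
        one_hot[y] = 1.0
        grad += r * (one_hot - p)        # d/dtheta log softmax(theta)[y]
    return theta + lr * grad             # gradient *ascent* on the reward
```

Samples that the discriminator rewards highly thus have their probability pushed up, which is the mechanism by which the discrimination result is fed back to the generator in gradient form.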
and S208, judging through a preset rule whether the training of the problem optimization model has reached the qualified condition, and applying the final problem optimization model once the qualified condition is reached, thereby avoiding the low later-stage efficiency caused by endless training and improving the working efficiency.
In some embodiments, the step S208 of determining whether the preset qualified condition is reached includes:
s2081, judging whether the variation of a judgment result obtained by judging the problem generated by the second generator by the second discriminator is smaller than a third preset threshold value or not;
and S2082, if the judgment result is smaller than the third preset threshold value, judging that the preset qualified condition is reached.
In steps S2081-S2082, when the variation of the discrimination result obtained by the second discriminator for discriminating the problem generated by the second generator is smaller than a third preset threshold, the input of the problem generated by the second generator to the second discriminator is stopped, and the input of the discrimination result of the second discriminator to the second generator is stopped, so as to end the training of the problem optimization model.
And when the variation of the discrimination result obtained by discriminating the attribution rate of the problem by the second discriminator according to all the obtained positive example samples and negative example samples is smaller than a third preset threshold, stopping inputting the problem set selected from the training set and the generated data set into the second discriminator, and stopping inputting the discrimination result of the second discriminator into the second generator to finish the training of the problem optimization model.
That is, in the antagonistic training, the second discriminator and the second generator are trained alternately until equilibrium is reached. When the abilities of the second generator and the second discriminator have been trained to a certain degree, the attribution rate with which the second discriminator discriminates the problems generated by the second generator tends to become stable. For example, taking fixed samples as an example, when the second discriminator uses fixed positive and negative example samples as the basis for discriminating problems, if the probability that a generated problem produced by the second generator belongs to the standard data set stays within a range of 0.39 to 0.41 over a preset number of discriminations (for example, 20 times), this indicates that the discrimination ability of the second discriminator has stabilized, and the training of the second generator and second discriminator models may be stopped at this point.
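The stability-based stopping test described above (for example, the attribution rate staying within 0.39-0.41 over 20 consecutive discriminations) can be sketched as a simple window check; the function name and default values are illustrative:

```python
def discriminator_stabilised(scores, window=20, band=0.02):
    """True when the last `window` attribution-rate readings all fall
    inside a band of width `band` (e.g. 0.39-0.41 over 20 readings),
    i.e. the second discriminator's output has stopped moving."""
    if len(scores) < window:
        return False
    recent = scores[-window:]
    return max(recent) - min(recent) <= band
```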
In other embodiments, the step S208 of determining whether the predetermined qualified condition is met includes:
s2083, judging whether the iteration number reaches a fourth preset threshold value;
and S2084, if the iteration number is judged to reach the fourth preset threshold value, judging that the preset qualified condition is reached.
In steps S2083-S2084, when the number of iterations reaches the fourth preset threshold, the input of problems generated by the second generator into the second discriminator is stopped, and the input of the discrimination results of the second discriminator into the second generator is stopped, so as to end the training of the problem optimization model, where one iteration consists of the second generator generating a problem and the second discriminator discriminating it.
Whether to stop training may thus be determined by taking the number of iterations of the second generator and the second discriminator as the judgment condition, where one iteration consists of the second generator generating a problem and the second discriminator discriminating it. For example, if the iteration index is set to 1000, training may be stopped after the second generator has generated 1000 problems, or after the second discriminator has performed 1000 discriminations.
After the training of the problem optimization model is finished, the export file of the trained problem optimization model is transmitted to the model server, and external service is provided using a RESTful Web service, or the file is directly loaded and called by an online service program as a local file.
The application also provides an intelligent screening device of request, includes:
the acquisition module 1 is used for acquiring request information of a target user through intelligent question answering, wherein the request information comprises a request amount and basic information of the target user;
the extraction module 2 is used for acquiring target user information corresponding to the basic information of the target user from a database;
the screening module 3 is used for judging whether the target user is a high-risk user or not by leading the request information and the target user information into a judgment model, wherein the judgment model comprises a first generator and a first discriminator, the first generator is used for generating evaluation information of the target user, the first discriminator is used for judging whether the evaluation information reaches a preset high-risk user standard or not, and the evaluation information comprises an applicable quota threshold range of the target user;
the limit judging module 4 is used for judging, when the target user is judged not to be a high-risk user, whether the request quota of the target user is within the applicable quota threshold range of the target user;
and the approval module 5 is used for directly passing the request corresponding to the request information when judging that the request quota of the target user is within the threshold range of the applicable quota of the target user.
The intelligent screening apparatus of request still includes:
and the reminding module is used for sending reminding information to the setting unit after the screening module 3 judges that the target user is the high-risk user.
The acquisition module 1 includes:
the acquisition unit is used for acquiring a target user question;
the problem optimization model is obtained based on generative confrontation network training and comprises a second generator and a second discriminator, and the second generator is used for generating a target generation problem corresponding to the target user problem;
the judging unit is used for judging whether the generation quality of the target generation problem is higher than a first preset threshold value or not according to the second judging device, and the generation quality is used for indicating the probability that the target generation problem is a standard problem;
and the decision unit is used for determining a target answer according to the target generation question if the generation quality of the target generation question is higher than a first preset threshold, wherein the target answer comprises the request information of the target user.
The intelligent screening apparatus of request still includes:
a second obtaining module, configured to obtain a training set, where the training set includes a source user problem-source specification problem pair, and the source user problem-source specification problem pair is used to represent a set of a source user problem and a source specification problem corresponding to the source user problem;
and the second training module is used for carrying out generative confrontation network training according to the training set to obtain a problem optimization model.
The second acquisition module includes:
the system comprises a calculating unit, a query unit and a query and answer unit, wherein the calculating unit is used for calculating similarity values of standard questions in a standard data set and source user questions in a user log, the standard data set is used for storing the standard questions, and the user log comprises interaction records of users and a question and answer system;
and the second judgment unit is used for taking the source user problem with the similarity value larger than a second preset threshold value with the standard problem as a candidate user problem so as to determine the problem in the candidate user problem, which is consistent with the standard problem in semantics, and further obtain a source user problem-source specification problem pair, wherein the standard problem is included in the source specification problem.
The second training module includes:
the training submodule is used for inputting the source user questions in the training set into the second generator so as to enable the second generator to carry out model training and obtain generated questions according to the trained models;
the set submodule is used for acquiring the generated problems obtained by the second generator, storing the generated problems in a generated data set, and storing the generated data set for storing the generated problems;
the sample acquisition submodule is used for inputting the source specification problem in the training set and the generation problem in the generated data set into the second discriminator so that the second discriminator takes the source specification problem in the training set as a positive sample and takes the generation problem in the generated data set as a negative sample to carry out model training;
the attribution rate judging submodule is used for inputting the generated problem generated by the second generator into the second discriminator so that the second discriminator can judge the attribution rate of the generated problem, wherein the attribution rate is used for indicating the probability that the problem belongs to the standard data set or the generated data set;
the judgment result obtaining submodule is used for obtaining the judgment result of the second judgment device on the generated problem;
the training model submodule is used for inputting the judgment result into the second generator so that the second generator performs model training according to the generation problem judged by the second judger and the judgment result and generates a new generation problem according to the trained model;
the updating data set submodule is used for acquiring the new generation problem generated by the second generator and storing the new generation problem in the generation data set;
the qualification judgment submodule is used for judging whether preset qualification conditions are met;
the ending submodule is used for ending the antagonism training of the problem optimization model if the qualified conditions are met, and obtaining a final problem optimization model;
and the antagonism training submodule is used for making the second generator and the second discriminator continue the cyclic antagonism training if the qualified condition is not met.
In some embodiments, the eligibility determination sub-module comprises:
a variable judgment unit for judging whether the variation of the judgment result obtained by the second discriminator judging the problem generated by the second generator is smaller than a third preset threshold value;
and the first qualified judgment unit judges that the first qualified judgment unit reaches the preset qualified condition if the first qualified judgment unit judges that the first qualified judgment unit is smaller than the third preset threshold.
In other embodiments, the eligibility determination sub-module includes:
the iteration judging unit is used for judging whether the iteration times reach a fourth preset threshold value or not;
and the second qualified judgment unit judges that the preset qualified condition is reached if the judgment iteration number reaches a fourth preset threshold value.
In some embodiments, the requested intelligent screening apparatus includes two kinds of qualification judgment submodules at the same time, and the training of the problem optimization model is completed by adopting a prior principle according to the sequence of meeting the qualification conditions, that is, when one kind of qualification judgment submodule previously obtains a result reaching the preset qualification conditions, the other kind of qualification judgment submodule is not used for judgment.
The present application also provides a computer device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above method when executing the computer program.
The embodiment of the present application further provides a computer device, which may be a server, and whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is used to provide computation and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement a method of intelligent request screening. Acquiring the request information of the target client in a question-and-answer manner reduces quality risk, helps users without independent writing ability to input request information accurately, gives a better user experience, and enables eligible non-high-risk users to be screened out for fast payout, improving payout efficiency and user experience.
The present application also proposes a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned requested intelligent screening method.
According to the intelligent request screening method and apparatus, the computer device and the storage medium, the request information of the target client is acquired in a question-and-answer manner, which reduces quality risk, helps users without independent writing ability to input request information accurately, gives a better user experience, and enables eligible non-high-risk users to be screened out for fast payout, improving payout efficiency and user experience.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.
Claims (10)
1. A method for intelligent screening of requests, comprising:
acquiring request information of a target user through intelligent question answering, wherein the request information comprises a request limit and basic information of the target user;
acquiring target user information corresponding to the basic information of the target user in a database;
importing the request information and the target user information into a judgment model to judge whether the target user is a high-risk user, wherein the judgment model comprises a first generator and a first discriminator, the first generator is used for generating evaluation information of the target user, the first discriminator is used for judging whether the evaluation information reaches a preset high-risk user standard, and the evaluation information comprises an applicable amount threshold range of the target user;
when the target user is judged not to be a high-risk user, judging whether the request quota of the target user is within the applicable quota threshold range of the target user;
and when the request quota of the target user is judged to be within the threshold range of the applicable quota of the target user, directly passing the request corresponding to the request information.
2. The intelligent request screening method of claim 1, wherein after the request information and the target user information obtained from a database are imported into the judgment model to judge whether the target user is a high-risk user, the method further comprises:
and if the target user is a high-risk user, sending reminder information to a preset unit.
3. The intelligent request screening method according to claim 1, wherein the acquiring request information of a target user through intelligent question answering comprises:
obtaining a target user question;
generating a target generated question corresponding to the target user question according to a question optimization model, wherein the question optimization model is obtained based on generative adversarial network training, the question optimization model comprises a second generator and a second discriminator, and the second generator is used for generating the target generated question corresponding to the target user question;
judging, according to the second discriminator, whether the generation quality of the target generated question is higher than a first preset threshold, wherein the generation quality indicates the probability that the target generated question is a standard question;
and if so, determining a target answer according to the target generated question, wherein the target answer comprises the request information of the target user.
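The inference-time flow of claim 3 can be sketched as follows. The second generator and second discriminator are replaced by trivial stubs (in the patent they come from the GAN training of claims 4 to 6), and the `FAQ` table and all function names are hypothetical:

```python
def second_generator(user_question):
    # Stub: normalize a raw user question toward a standard phrasing.
    return user_question.strip().lower().rstrip("?") + "?"

def second_discriminator(generated_question):
    # Stub "generation quality": probability the question is a standard one.
    return 0.9 if generated_question.endswith("?") else 0.2

FAQ = {"what is my credit limit?": "Your limit depends on the evaluation."}

def answer(user_question, first_preset_threshold=0.5):
    target_generated = second_generator(user_question)
    quality = second_discriminator(target_generated)
    if quality > first_preset_threshold:
        # Quality high enough: determine the target answer from the
        # generated (standardized) question.
        return FAQ.get(target_generated, "No matching standard answer.")
    return None  # quality too low; the question was not standardized

print(answer("  What is my credit limit "))
```

The design choice worth noting is that the discriminator acts as a gate at inference time: only questions it judges sufficiently "standard" proceed to answer lookup.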
4. The intelligent request screening method of claim 3, wherein before the obtaining a target user question, the method further comprises:
obtaining a training set, wherein the training set comprises source user question-source standard question pairs, and each pair represents a source user question together with the source standard question corresponding to that source user question;
and performing generative adversarial network training according to the training set to obtain the question optimization model.
5. The intelligent request screening method of claim 4, wherein the obtaining a training set comprises:
calculating similarity values between standard questions in a standard data set and source user questions in a user log, wherein the standard data set is used for storing the standard questions, and the user log comprises interaction records between users and a question-answering system;
and taking each source user question whose similarity value with a standard question is greater than a second preset threshold as a candidate user question, and determining, among the candidate user questions, the questions whose semantics are consistent with the standard question, so as to obtain the source user question-source standard question pairs, wherein the standard question is contained in the source standard questions.
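The training-set construction of claim 5 amounts to a similarity join between a standard data set and a user log. The sketch below uses `difflib.SequenceMatcher` as a stand-in similarity measure (the patent does not specify one), and all names are illustrative:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # Character-level ratio in [0, 1]; a stand-in for whatever similarity
    # measure an implementation of claim 5 would actually use.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def build_pairs(standard_questions, user_log, second_preset_threshold=0.6):
    pairs = []
    for user_q in user_log:                 # source user questions
        for std_q in standard_questions:    # standard data set
            if similarity(user_q, std_q) > second_preset_threshold:
                # user_q becomes a candidate user question; per the claim,
                # a further semantic-consistency check would confirm it
                # means the same thing as std_q before pairing.
                pairs.append((user_q, std_q))
    return pairs

standard = ["how do I raise my credit limit"]
log = ["how can i raise my credit limit", "what is the weather today"]
print(build_pairs(standard, log))
```

With these toy inputs only the first log entry clears the threshold, so a single (source user question, source standard question) pair is produced.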
6. The intelligent request screening method of claim 5, wherein the performing generative adversarial network training according to the training set to obtain the question optimization model comprises:
inputting the source user questions in the training set into the second generator, so that the second generator performs model training and obtains generated questions according to the trained model;
acquiring the generated questions obtained by the second generator, and storing the generated questions in a generated data set, wherein the generated data set is used for storing the generated questions;
inputting the source standard questions in the training set and the generated questions in the generated data set into the second discriminator, so that the second discriminator performs model training by taking the source standard questions in the training set as positive samples and the generated questions in the generated data set as negative samples;
inputting the generated questions generated by the second generator into the second discriminator, so that the second discriminator discriminates the generated questions by an attribution probability, wherein the attribution probability indicates the probability that a question belongs to the standard data set or to the generated data set;
obtaining a discrimination result of the second discriminator on the generated questions;
inputting the discrimination result into the second generator, so that the second generator performs model training according to the generated questions discriminated by the second discriminator and the discrimination result, and generates new generated questions according to the trained model;
acquiring the new generated questions generated by the second generator, and storing the new generated questions in the generated data set;
judging whether a preset qualified condition is reached;
and if the qualified condition is reached, ending the adversarial training of the question optimization model to obtain a final question optimization model; if the qualified condition is not reached, causing the second generator and the second discriminator to continue the cyclic adversarial training until the qualified condition is reached.
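The alternating training of claim 6 and the stopping rule of claim 7 can be illustrated with a toy numeric model: the text-generating networks are replaced by a single scalar parameter that the "generator" nudges toward what the "discriminator" accepts, and training stops once the discriminator's result changes by less than a threshold. Nothing here is the patent's actual model; the sketch only shows the loop structure:

```python
def adversarial_train(max_rounds=1000, third_preset_threshold=1e-4):
    gen_param = 0.0     # toy generator state; "standard" samples sit at 1.0
    prev_score = None
    rounds = 0
    for rounds in range(1, max_rounds + 1):
        generated = gen_param                    # generator output
        # Toy attribution probability: how "standard" the sample looks.
        score = 1.0 - abs(1.0 - generated)
        # Generator update driven by the discrimination result (claim 6).
        gen_param += 0.1 * (1.0 - generated)
        # Claim 7: qualified condition = the variation of the discrimination
        # result falls below the third preset threshold.
        if prev_score is not None and abs(score - prev_score) < third_preset_threshold:
            break
        prev_score = score
    return score, rounds

score, rounds = adversarial_train()
print(round(score, 4), rounds)
```

The loop converges because each generator update shrinks the gap to the "standard" region geometrically, so successive discriminator scores eventually differ by less than the threshold and the qualified condition of claim 7 fires.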
7. The intelligent request screening method according to claim 6, wherein the judging whether a preset qualified condition is reached comprises:
judging whether the variation of the discrimination result obtained by the second discriminator discriminating the questions generated by the second generator is smaller than a third preset threshold;
and if the variation is smaller than the third preset threshold, judging that the preset qualified condition is reached.
8. An intelligent request screening apparatus, comprising:
an acquisition module, used for acquiring request information of a target user through intelligent question answering, wherein the request information comprises a request amount and basic information of the target user;
an extraction module, used for acquiring, from a database, target user information corresponding to the basic information of the target user;
a screening module, used for judging whether the target user is a high-risk user by importing the request information and the target user information into a judgment model, wherein the judgment model comprises a first generator and a first discriminator, the first generator is used for generating evaluation information of the target user, the first discriminator is used for judging whether the evaluation information reaches a preset high-risk user standard, and the evaluation information comprises an applicable amount threshold range of the target user;
an amount judging module, used for judging, when the target user is judged not to be a high-risk user, whether the request amount of the target user is within the applicable amount threshold range of the target user;
and an approval module, used for directly passing the request when it is judged that the request amount of the target user is within the applicable amount threshold range of the target user.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910816251.XA CN110727770A (en) | 2019-08-30 | 2019-08-30 | Intelligent request screening method and device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110727770A true CN110727770A (en) | 2020-01-24 |
Family
ID=69218904
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910816251.XA Pending CN110727770A (en) | 2019-08-30 | 2019-08-30 | Intelligent request screening method and device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110727770A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115344869A (en) * | 2022-08-10 | 2022-11-15 | 中国电信股份有限公司 | Risk determination method and device, storage medium and electronic equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104616194A (en) * | 2014-12-31 | 2015-05-13 | 腾讯科技(深圳)有限公司 | Data processing method and payment platform |
CN107886425A (en) * | 2017-10-25 | 2018-04-06 | 上海壹账通金融科技有限公司 | Credit evaluation method, apparatus, equipment and computer-readable recording medium |
CN108389121A (en) * | 2018-02-07 | 2018-08-10 | 平安普惠企业管理有限公司 | Loan data processing method, device, computer equipment and storage medium |
CN109509085A (en) * | 2018-11-27 | 2019-03-22 | 平安科技(深圳)有限公司 | Information processing method, device, computer equipment and storage medium before borrowing |
CN109584048A (en) * | 2018-11-30 | 2019-04-05 | 上海点融信息科技有限责任公司 | The method and apparatus that risk rating is carried out to applicant based on artificial intelligence |
CN109685639A (en) * | 2018-08-21 | 2019-04-26 | 深圳壹账通智能科技有限公司 | Loan checking method, device, equipment and computer readable storage medium |
CN110009479A (en) * | 2019-03-01 | 2019-07-12 | 百融金融信息服务股份有限公司 | Credit assessment method and device, storage medium, computer equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110019732B (en) | Intelligent question answering method and related device | |
CN109299245B (en) | Method and device for recalling knowledge points | |
CN110390408B (en) | Transaction object prediction method and device | |
CN111159367B (en) | Information processing method and related equipment | |
CN109376222A (en) | Question and answer matching degree calculation method, question and answer automatic matching method and device | |
CN109300050A (en) | Insurance method for pushing, device and storage medium based on user's portrait | |
CN114330354A (en) | Event extraction method and device based on vocabulary enhancement and storage medium | |
CN111259647A (en) | Question and answer text matching method, device, medium and electronic equipment based on artificial intelligence | |
CN112148831B (en) | Image-text mixed retrieval method and device, storage medium and computer equipment | |
CN111753167A (en) | Search processing method, search processing device, computer equipment and medium | |
CN111507573A (en) | Business staff assessment method, system, device and storage medium | |
CN114358657B (en) | Post recommendation method and device based on model fusion | |
CN113239699A (en) | Depth knowledge tracking method and system integrating multiple features | |
CN117891939A (en) | Text classification method combining particle swarm algorithm with CNN convolutional neural network | |
CN116521936A (en) | Course recommendation method and device based on user behavior analysis and storage medium | |
CN112434211A (en) | Data processing method, device, storage medium and equipment | |
CN117725191B (en) | Guide information generation method and device of large language model and electronic equipment | |
CN114491023A (en) | Text processing method and device, electronic equipment and storage medium | |
CN110727770A (en) | Intelligent request screening method and device, computer equipment and storage medium | |
CN117668199A (en) | Intelligent customer service question-answer prediction and recommendation dialogue generation method and device | |
CN108304568A (en) | A kind of real estate Expectations big data processing method and system | |
CN112184292A (en) | Marketing method and device based on artificial intelligence decision tree | |
CN117876090A (en) | Risk identification method, electronic device, storage medium, and program product | |
CN112989054B (en) | Text processing method and device | |
CN111382246B (en) | Text matching method, matching device, terminal and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200124 |