CN112800170A - Question matching method and device and question reply method and device - Google Patents
- Publication number
- CN112800170A (application CN201911115389.3A)
- Authority
- CN
- China
- Prior art keywords
- question
- layer
- matched
- semantic similarity
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F16/3329—Natural language query formulation or dialogue systems
- G06F16/31—Indexing; Data structures therefor; Storage structures
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3344—Query execution using natural language analysis
- G06N3/04—Neural networks; Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
Abstract
The embodiments of the present application provide a question matching method and apparatus and a question reply method and apparatus, relating to the technical field of artificial intelligence. The question matching method comprises the following steps: acquiring a question to be matched, and matching the question to be matched using a dictionary tree (trie). If the matching fails, a plurality of candidate questions similar to the question to be matched are retrieved from a preset question bank, and the question to be matched is matched against the candidate questions using a trained question-pair-based semantic similarity calculation model. The model comprises an encoding layer, a local interaction layer and an aggregation layer. Because the semantic similarity of question pairs is calculated by the trained model, semantics serve as a reference factor when searching for similar candidate questions, which improves the accuracy of determining similar questions. The embodiments of the present application are applicable to a question answering system running on an electronic device such as a mobile phone, achieving intelligent replies through natural language processing technology.
Description
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a question matching method and apparatus and a question reply method and apparatus.
Background
Question answering is a technology in which a user poses a question and a machine answers it automatically during human-computer interaction. Common question answering techniques include FAQ (Frequently Asked Questions, question answering based on frequently-asked-question lists) and CQA (Community Question Answering). An FAQ system stores frequently asked questions and their corresponding answers in list form; after receiving a question posed by a user, it searches the list for similar frequently asked questions and replies to the user with the corresponding answers.
In the related art, the stored frequently asked questions are used as training data to train a classifier; the trained classifier classifies the questions posed by the user, and similar frequently asked questions are then searched for within the resulting class. Because the classifier classifies questions based only on the characters they contain, without reference to their semantics, the determination of similar questions is not sufficiently accurate.
Disclosure of Invention
The present application provides a question matching method and apparatus and a question reply method and apparatus, in which the semantic similarity of a question pair is calculated using a trained question-pair-based semantic similarity calculation model, so that semantics serve as a reference factor when searching for similar candidate questions, improving the accuracy of determining similar questions.
In a first aspect, the present application provides a question matching method, the method including: acquiring a question to be matched; matching the question to be matched using a dictionary tree (trie); if the matching fails, retrieving a plurality of candidate questions similar to the question to be matched from a preset question bank; and matching the question to be matched with the candidate questions using a trained question-pair-based semantic similarity calculation model. The model comprises an input layer, an encoding layer, a local interaction layer, an aggregation layer and an output layer; the input layer inputs word information of the question pair, the encoding layer performs semantic analysis on each question in the pair, determines the importance of each word in the question and learns the structural features of the question, the local interaction layer performs semantic relevance analysis between the two questions in the pair, the aggregation layer performs feature extraction and aggregation on the output of the local interaction layer, and the output layer calculates the semantic similarity of the question pair.
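The two-stage flow of the first aspect can be sketched as follows: an exact lookup first, then semantic ranking of candidates only when the lookup fails. This is a minimal illustration, not the patented implementation: `exact_index` (a plain dict) stands in for the dictionary tree, and `overlap` (a character-overlap ratio) stands in for the trained semantic similarity model.

```python
def match_question(query, exact_index, question_bank, similarity, top_k=3):
    """Two-stage matching: exact lookup, then semantic ranking of candidates."""
    if query in exact_index:                      # stage 1: stands in for the trie lookup
        return exact_index[query]
    # stage 2: retrieve candidates and rank them by the similarity score
    scored = sorted(question_bank, key=lambda q: similarity(query, q), reverse=True)
    candidates = scored[:top_k]
    return candidates[0]

def overlap(a, b):
    """Toy character-overlap score standing in for the trained model."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / max(len(sa | sb), 1)

bank = ["how do I reset my password", "how do I change my email", "what is my balance"]
index = {"how do I reset my password": "how do I reset my password"}

print(match_question("how can I reset a password", index, bank, overlap))
# → how do I reset my password
```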
Optionally, before the acquiring of the question to be matched, the method further includes: updating the dictionary tree, the preset question bank, and the corresponding index library.
Optionally, the matching of the question to be matched using the dictionary tree includes: removing modal particles from the question to be matched; unifying the punctuation marks in the question to be matched; performing synonym substitution on the question to be matched to generate a plurality of similar questions of the question to be matched; and matching each similar question using the dictionary tree.
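A minimal sketch of this preprocessing-plus-trie step is shown below. All particulars are hypothetical stand-ins: an English modal-particle list and a one-entry synonym table (the patent targets Chinese particles and a full synonym dictionary), and a single canonical form in place of generating every similar variant.

```python
class Trie:
    """Minimal trie ('dictionary tree') mapping stored questions to answers."""
    def __init__(self):
        self.root = {}

    def insert(self, question, answer):
        node = self.root
        for ch in question:
            node = node.setdefault(ch, {})
        node["$"] = answer                       # terminal marker holds the answer

    def search(self, question):
        node = self.root
        for ch in question:
            if ch not in node:
                return None
            node = node[ch]
        return node.get("$")

MODAL_PARTICLES = {"ah", "um", "eh"}             # illustrative; the patent targets particles such as 吗/呢/啊
SYNONYMS = {"cellphone": "mobile phone"}         # hypothetical synonym table

def normalize(question):
    body = question.strip().rstrip("？?！!。.").lower()   # unify/strip end punctuation (fullwidth too)
    words = [w for w in body.split() if w not in MODAL_PARTICLES]
    return " ".join(SYNONYMS.get(w, w) for w in words) + "?"

trie = Trie()
trie.insert(normalize("How do I restart my mobile phone?"), "Hold the power button for 10 seconds.")
print(trie.search(normalize("Um how do I restart my cellphone？")))
# → Hold the power button for 10 seconds.
```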
Optionally, the matching of the question to be matched with the candidate questions using the trained question-pair-based semantic similarity calculation model includes: pairing each candidate question with the question to be matched to form question pairs; inputting the question pairs into the question-pair-based semantic similarity calculation model; and matching the question to be matched with the candidate questions according to the semantic similarity corresponding to each question pair.
Optionally, the question-pair-based semantic similarity calculation model is trained by the following steps: acquiring a reference question pair and the reference semantic similarity corresponding to the reference question pair, wherein the reference question pair comprises a first reference question and a second reference question; performing word segmentation on the first reference question and the second reference question respectively to generate a first reference word set corresponding to the first reference question and a second reference word set corresponding to the second reference question; determining the part of speech of each reference word in the first and second reference word sets to generate a first reference part-of-speech set and a second reference part-of-speech set; determining the synonyms of each reference word in the first and second reference word sets to generate a first reference synonym set and a second reference synonym set; inputting the first reference word set, the second reference word set, the first reference part-of-speech set, the second reference part-of-speech set, the first reference synonym set and the second reference synonym set into the encoding layer of the model; training the parameters of the model according to the output of its output layer and the reference semantic similarity; and ending the training when the accuracy of the model exceeds a preset threshold.
Optionally, the encoding layer includes a bidirectional recurrent neural network layer, a first normalization layer, and a stacked bidirectional self-attention layer.
Optionally, the local interaction layer comprises a bidirectional multi-angle similarity analysis layer and a second normalization layer.
In a second aspect, the present application provides a question reply method, the method comprising: acquiring a question to be replied to; determining a candidate question matching the question to be replied to using the above question matching method; and replying with the candidate answer corresponding to the candidate question.
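The reply flow of the second aspect reduces to: match, then look up the stored answer. In this sketch, `match` and the `answers` mapping are hypothetical stand-ins for the matching method and the preset question bank.

```python
def reply(question, match, answers):
    """Reply flow: match the incoming question, then return the stored candidate answer."""
    candidate = match(question)                  # `match` stands in for the matching method
    if candidate is None:
        return "Sorry, I don't know that one yet."
    return answers[candidate]

answers = {"how do I reset my password": "Open Settings > Security > Reset password."}
match = lambda q: "how do I reset my password" if "password" in q else None

print(reply("I forgot my password", match, answers))
# → Open Settings > Security > Reset password.
```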
In a third aspect, the present application provides a question matching apparatus, the apparatus comprising: a first acquisition module for acquiring a question to be matched; a first matching module for matching the question to be matched using a dictionary tree; a retrieval module for retrieving a plurality of candidate questions similar to the question to be matched from a preset question bank when the first matching module fails to match; and a second matching module for matching the question to be matched with the candidate questions using a trained question-pair-based semantic similarity calculation model. The model comprises an input layer, an encoding layer, a local interaction layer, an aggregation layer and an output layer; the input layer inputs word information of the question pair, the encoding layer performs semantic analysis on each question in the pair, determines the importance of each word in the question and learns the structural features of the question, the local interaction layer performs semantic relevance analysis between the two questions in the pair, the aggregation layer performs feature extraction and aggregation on the output of the local interaction layer, and the output layer calculates the semantic similarity of the question pair.
Optionally, the apparatus further comprises: an updating module for updating the dictionary tree, the preset question bank, and the corresponding index library.
Optionally, the first matching module includes: a removal submodule for removing modal particles from the question to be matched; a unification submodule for unifying punctuation marks in the question to be matched; a substitution submodule for performing synonym substitution on the question to be matched to generate a plurality of similar questions of the question to be matched; and a first matching submodule for matching each similar question using the dictionary tree.
Optionally, the second matching module includes: a pairing submodule for forming a question pair from each candidate question and the question to be matched; an input submodule for inputting the question pairs into the question-pair-based semantic similarity calculation model; and a second matching submodule for matching the question to be matched with the candidate questions according to the semantic similarity corresponding to each question pair.
Optionally, the apparatus further comprises: a second acquisition module for acquiring a reference question pair and the reference semantic similarity corresponding to the reference question pair, wherein the reference question pair comprises a first reference question and a second reference question; a processing module for performing word segmentation on the first and second reference questions respectively to generate a first reference word set and a second reference word set; a first determining module for determining the part of speech of each reference word in the two word sets to generate a first reference part-of-speech set and a second reference part-of-speech set; a second determining module for determining the synonyms of each reference word in the two word sets to generate a first reference synonym set and a second reference synonym set; an input module for inputting the first reference word set, the second reference word set, the first reference part-of-speech set, the second reference part-of-speech set, the first reference synonym set and the second reference synonym set into the encoding layer of the question-pair-based semantic similarity calculation model; a training module for training the parameters of the model according to the output of its output layer and the reference semantic similarity; and a completion module for ending the training when the accuracy of the model exceeds a preset threshold.
Optionally, the encoding layer includes a bidirectional recurrent neural network layer, a first normalization layer, and a stacked bidirectional self-attention layer.
Optionally, the local interaction layer comprises a bidirectional multi-angle similarity analysis layer and a second normalization layer.
In a fourth aspect, the present application provides a question reply apparatus, the apparatus comprising: a third acquisition module for acquiring a question to be replied to; a third determining module for determining a candidate question matching the question to be replied to using the aforementioned question matching method; and a reply module for replying with the candidate answer corresponding to the candidate question.
In a fifth aspect, the present application provides a question answering system, comprising: a question answering interface for receiving input content from a user and displaying the generated reply content; a distribution agent for distributing the user's input content to a corresponding reply apparatus according to its type; and the question reply apparatus for receiving the question to be replied to sent by the distribution agent and determining a corresponding answer from the preset question bank.
In a sixth aspect, the present application provides an electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor performing the following steps when executing the computer program: acquiring a question to be matched; matching the question to be matched using a dictionary tree; if the matching fails, retrieving a plurality of candidate questions similar to the question to be matched from a preset question bank; and matching the question to be matched with the candidate questions using a trained question-pair-based semantic similarity calculation model. The model comprises an input layer, an encoding layer, a local interaction layer, an aggregation layer and an output layer; the input layer inputs word information of the question pair, the encoding layer performs semantic analysis on each question in the pair, determines the importance of each word in the question and learns the structural features of the question, the local interaction layer performs semantic relevance analysis between the two questions in the pair, the aggregation layer performs feature extraction and aggregation on the output of the local interaction layer, and the output layer calculates the semantic similarity of the question pair.
Optionally, before acquiring the question to be matched, the electronic device further performs the following step: updating the dictionary tree, the preset question bank, and the corresponding index library.
Optionally, the electronic device matches the question to be matched using the dictionary tree, specifically including the following steps: removing modal particles from the question to be matched; unifying the punctuation marks in the question to be matched; performing synonym substitution on the question to be matched to generate a plurality of similar questions of the question to be matched; and matching each similar question using the dictionary tree.
Optionally, the electronic device matches the question to be matched with the candidate questions using the trained question-pair-based semantic similarity calculation model, specifically including the following steps: pairing each candidate question with the question to be matched to form question pairs; inputting the question pairs into the question-pair-based semantic similarity calculation model; and matching the question to be matched with the candidate questions according to the semantic similarity corresponding to each question pair.
Optionally, when training the question-pair-based semantic similarity calculation model, the electronic device performs the following steps: acquiring a reference question pair and the reference semantic similarity corresponding to the reference question pair, wherein the reference question pair comprises a first reference question and a second reference question; performing word segmentation on the first reference question and the second reference question respectively to generate a first reference word set and a second reference word set; determining the part of speech of each reference word in the two word sets to generate a first reference part-of-speech set and a second reference part-of-speech set; determining the synonyms of each reference word in the two word sets to generate a first reference synonym set and a second reference synonym set; inputting the first reference word set, the second reference word set, the first reference part-of-speech set, the second reference part-of-speech set, the first reference synonym set and the second reference synonym set into the encoding layer of the model; training the parameters of the model according to the output of its output layer and the reference semantic similarity; and ending the training when the accuracy of the model exceeds a preset threshold.
Optionally, the encoding layer includes a bidirectional recurrent neural network layer, a first normalization layer, and a stacked bidirectional self-attention layer.
Optionally, the local interaction layer comprises a bidirectional multi-angle similarity analysis layer and a second normalization layer.
In a seventh aspect, the present application provides an electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor performing the following steps when executing the computer program: acquiring a question to be replied to; determining a candidate question matching the question to be replied to using the method of the first aspect; and replying with the candidate answer corresponding to the candidate question.
In an eighth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when run on a computer, causes the computer to perform the method according to the first or second aspect.
Drawings
FIG. 1 is a schematic structural diagram of a classification-based question answering system in the prior art;
FIG. 2a is a schematic flow chart of a method for constructing a deep learning model in the prior art;
FIG. 2b is a schematic flow chart of a deep learning model-based question answering method in the prior art;
FIG. 3 is a flow chart illustrating a method for matching a problem according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart illustrating an exemplary unification process according to the present disclosure;
FIG. 5 is a flowchart illustrating a training procedure of a semantic similarity calculation model based on question pairs according to an embodiment of the present disclosure;
FIG. 6 is a schematic flow chart illustrating the generation of a training positive example and a training negative example according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating a method for matching questions provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of the initial loading in FIG. 7;
FIG. 9 is a schematic structural diagram of a semantic similarity calculation model based on question pairs according to an embodiment of the present application;
FIG. 10 is a flowchart illustrating a question reply process according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a question matching apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a first matching module according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a second matching module according to an embodiment of the present application;
FIG. 14 is another schematic structural diagram of the question matching apparatus according to an embodiment of the present application;
FIG. 15 is a schematic structural diagram of a question reply apparatus according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a question answering system according to an embodiment of the present application; and
fig. 17 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
A question matching method and apparatus, a question reply method and apparatus, a question answering system, and a computer-readable storage medium according to embodiments of the present application are described below with reference to the accompanying drawings.
To describe the question matching method provided by the embodiments of the present application more clearly, the related art is first introduced below.
Question Answering (QA) technology refers to a machine automatically answering questions posed by a user, so as to meet the user's need for related information. It is applied in various scenarios such as enterprise customer service and intelligent assistants; typical applications include Apple's Siri, Google Now, and Microsoft's Cortana.
Question answering is an intelligent human-computer interaction technology that can be implemented on various electronic devices, including mobile terminals (mobile phones), smart screens, unmanned aerial vehicles, intelligent connected vehicles (ICVs), smart/intelligent cars, and in-vehicle devices.
In conventional search techniques, keywords are input and a list of related documents is output. In contrast, the input to question answering is typically a question in natural language form, and the output is a concise answer or a list of possible answers.
In recent years, with the rapid development of artificial intelligence, natural language processing and mobile internet technology, the question-answering technology gradually develops from a simple keyword as a core to a deep question-answering technology as a core.
Based on differences in the type of target data source, common question-answering systems can be roughly divided into: question-answering systems based on structured data, question-answering systems based on free text, and question-answering systems based on question-answer pairs. The systems based on question-answer pairs further include: question-answering systems based on a frequently asked questions (FAQ) list and question-answering systems based on community question answering (CQA).
A question-answering system based on a frequently asked questions (FAQ) list has the advantages of large data volume, high question quality, and good data structure, which makes it well suited to mobile assistant applications.
In the prior art, there are multiple implementations of a question-answering system based on a frequently asked questions list.
The first implementation uses a trained classifier to classify the question posed by the user, retrieves similar frequently asked questions within the predicted category, and replies to the user with the answers corresponding to those frequently asked questions. Fig. 1 is a schematic structural diagram of a prior-art classification-based question-answering system. As shown in fig. 1, a conventional classification-based question-answering system divides the frequently asked questions stored in a question file and the corresponding answers stored in an answer file into different question groups, and trains the classifier with the different question groups as training data, so that the classifier can classify different types of questions.
In use, the user inputs a question through a user interface module; the classifier determines the category of the question posed by the user and then retrieves, within that category, a plurality of similar frequently asked questions. The answer that best matches the user's question is selected from the answers corresponding to those frequently asked questions and displayed to the user through the user interface module.
In the first implementation, the classifier relies on surface features such as characters and character counts when classifying questions, and does not solve the semantic gap problem existing in conventional search technology. Moreover, as the number of frequently asked questions increases, the accuracy of the classifier becomes lower and lower, and the classifier cannot adapt to different types of application scenarios.
In addition, as time passes, new question-answer pairs are continuously generated, old question-answer pairs are deleted, or some question-answer pairs are updated. Once training is complete, the classifier cannot be updated in real time as the question-answer pairs change, so the similar frequently asked questions it retrieves become outdated.
The second implementation captures a question-answer data set from the internet and stores it as a frequently asked questions list. A plurality of frequently asked questions similar to the question posed by the user, together with their corresponding answers, are retrieved from the question-answer data set; a neural-network-based question-answer matching model then matches the answers against the user's question and determines the best-matching answer. Fig. 2a is a schematic flowchart of a prior-art method for constructing a deep learning model. Fig. 2b is a schematic flowchart of a prior-art question answering method based on the deep learning model. As shown in fig. 2a, during construction of the deep learning model, network question-answer data is captured from the network and stored in a relational database as a question-answer data set, and a full-text retrieval service is established. Chinese word segmentation is performed on the questions stored in the relational database, and a BOW (Bag of Words) vector, a TF-IDF (Term Frequency-Inverse Document Frequency) value and a word2vec word vector are generated from the segmentation result, from which a corresponding text representation vector is generated. The question-answer data in the question-answer data set is used as training data to train the neural-network-based question-answer matching model. As shown in fig. 2b, after a user question is received, the full-text retrieval service provided by the relational database is used to retrieve a plurality of questions similar to the user question, forming a similar question set, and the text representation vector of each similar question in the set is obtained.
Chinese word segmentation is likewise performed on the user question, and a BOW vector, a TF-IDF value and a word2vec word vector are generated from the segmentation result, from which a text representation vector corresponding to the user question is generated.
A plurality of similar questions are then selected by calculating the cosine similarity between the text representation vector of each similar question and that of the user question, and the answers corresponding to the selected similar questions are taken as candidate answers. The user question and the candidate answers are input into the trained neural-network-based question-answer matching model to determine the answer to the user question.
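The cosine-similarity ranking step described above can be sketched as follows; the vectors and question identifiers here are illustrative placeholders, not data from the patent.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two text representation vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Rank similar questions by cosine similarity to the user-question vector.
user_vec = [0.2, 0.8, 0.1]
similar_vecs = {"q1": [0.1, 0.9, 0.0], "q2": [0.9, 0.1, 0.3]}
ranked = sorted(similar_vecs,
                key=lambda q: cosine_similarity(user_vec, similar_vecs[q]),
                reverse=True)
```

The questions whose vectors rank highest would then supply the candidate answers.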
In the second implementation, as business needs evolve and time passes, questions in the question-answer data set must be continuously added or deleted (for example, new questions need to be added, or duplicate or unnecessary questions need to be removed). After the question-answer data set changes, the steps of the deep learning model construction method must be executed again before the full-text retrieval service, the text representation vectors and the neural-network-based question-answer matching model can be updated, so real-time dynamic updating cannot be achieved. That is, over time, the previously established full-text retrieval service, text representation vectors and question-answer matching model become outdated, and the answers they determine for user questions likewise become outdated and inconsistent with the actual situation.
In addition, the neural-network-based question-answer matching model is not well suited to a question-answering system based on frequently asked questions. Specifically, for the question-answer matching model there is a gap between the semantic spaces of questions and answers, and it is difficult to define reasonable corresponding semantic features, resulting in inaccurate question-answer matching. Moreover, as the application scenario changes, the answer corresponding to the same question changes, so the question-answer matching model is tightly coupled to its application scenario and lacks generality.
Based on the above description of the related art, it can be seen that when the related art searches a stored question set for questions similar to the user's question, the determination of similar questions is not accurate enough.
In order to solve the above problem, an embodiment of the present application provides a question matching method, and fig. 3 is a flowchart illustrating the question matching method provided in the embodiment of the present application. As shown in fig. 3, the method includes:
Step S101, acquiring the question to be matched.
Based on the above description of the question-answering system, it can be seen that the question to be matched is a question posed by the user.
The question posed by the user may be acquired in various ways, for example through a human-computer interaction interface, through which the user may pose the question by various input means such as keyboard input and voice input.
Step S102, matching the question to be matched by using the dictionary tree.
It should be noted that, in the embodiment of the present application, the question is matched in two respects, character and semantic: the dictionary tree is used for character matching, and the question-pair-based semantic similarity calculation model is used for semantic matching.
A dictionary tree (trie), also called a word search tree, is a tree structure in which the root node contains no character, each node other than the root contains exactly one character, the characters along the path from the root node to a given node are concatenated to form the string corresponding to that node, and the characters contained in the children of each node are all different.
It can be understood that, in the embodiment of the present application, a plurality of frequently asked questions are stored in advance, and each frequently asked question is inserted into the dictionary tree as one character string, so that a node in the dictionary tree can correspond to a frequently asked question. The character string corresponding to the question to be matched is split into individual characters, which are matched in sequence against the dictionary tree; if a node matching the character string of the question to be matched is found in the dictionary tree, the question to be matched is determined to match the frequently asked question corresponding to that node.
It should be noted that if matching of the question to be matched can be completed using the dictionary tree, character matching of the question is complete. That is, the question posed by the user is identical in content to a candidate question, so in the subsequent reply process the answer corresponding to that candidate question can be used directly to answer the user's question.
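A minimal character-level dictionary tree of the kind described above can be sketched in Python as follows; the stored question and its identifier are hypothetical examples.

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.question_id = None  # set when a stored question ends at this node

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, question, question_id):
        # Each frequently asked question is inserted character by character.
        node = self.root
        for ch in question:
            node = node.children.setdefault(ch, TrieNode())
        node.question_id = question_id

    def match(self, question):
        # Exact character-level match: returns the stored question's id,
        # or None when no stored question equals the input string.
        node = self.root
        for ch in question:
            if ch not in node.children:
                return None
            node = node.children[ch]
        return node.question_id

trie = Trie()
trie.insert("how to reset my password", 42)
```

A prefix of a stored question does not match, because `question_id` is only set on the final node of each inserted string.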
Step S103, if the matching fails, retrieving a plurality of candidate questions similar to the question to be matched from a preset question bank.
It should be noted that, if matching of the question to be matched is not completed in step S102, there is no candidate question among the pre-stored frequently asked questions that is identical to the question posed by the user, and the question to be matched needs to be matched by semantic analysis.
It can be understood that there are a large number of frequently asked questions in the preset question bank; if every frequently asked question were input into the semantic similarity calculation model together with the question to be matched, the amount of calculation would be too large and the efficiency very low.
In order to reduce the amount of calculation in the subsequent process, the frequently asked questions in the preset question bank can be screened preliminarily. Specifically, a plurality of candidate questions similar to the question to be matched are retrieved from the preset question bank, and then each candidate question is input into the semantic similarity calculation model together with the question to be matched, so as to determine the candidate question that matches it.
Unlike the dictionary-tree matching described above, retrieval from the preset question bank only requires finding frequently asked questions that share keywords with the question to be matched; the content need not be completely identical. It can be understood that the more keywords a frequently asked question shares with the question to be matched, the higher their similarity in content, so in the embodiment of the present application the retrieval results are sorted in descending order of this similarity and the first N frequently asked questions are taken as candidate questions, where N is a positive integer.
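The preliminary keyword-based screening can be sketched as follows, using the shared-keyword count as the similarity score and keeping the top-N results; the question bank contents are illustrative placeholders.

```python
def retrieve_candidates(query_keywords, question_bank, n):
    # Score each stored question by the number of keywords it shares with the
    # query, sort in descending order of score, and keep the top-N candidates.
    query_set = set(query_keywords)
    scored = []
    for qid, keywords in question_bank.items():
        overlap = len(query_set & set(keywords))
        if overlap > 0:
            scored.append((overlap, qid))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [qid for _, qid in scored[:n]]

bank = {
    "q1": ["reset", "password", "account"],
    "q2": ["delete", "account"],
    "q3": ["change", "password"],
}
candidates = retrieve_candidates(["reset", "password"], bank, 2)
```

In practice this screening is performed by a full-text retrieval service rather than a linear scan, but the ranking principle is the same.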
Step S104, matching the question to be matched with the candidate questions by using the trained question-pair-based semantic similarity calculation model.
The question-pair-based semantic similarity calculation model includes an input layer, a coding layer, a local interaction layer, an aggregation layer and an output layer. The input layer is used for inputting word information of the question pair; the coding layer is used for performing semantic analysis on the questions in the question pair, determining the importance of each word in a question and learning the structural features of the question; the local interaction layer is used for performing semantic relevance analysis on the two questions in the question pair; the aggregation layer is used for performing feature extraction and aggregation on the output of the local interaction layer; and the output layer is used for calculating the semantic similarity of the question pair.
After the preliminary screening of candidate questions by content in step S103, the question to be matched needs to be matched semantically.
The embodiment of the application adopts the question-pair-based semantic similarity calculation model to match the question to be matched with the candidate questions.
One possible implementation is to combine each candidate question with the question to be matched to form a question pair, input each question pair into the question-pair-based semantic similarity calculation model, and match the question to be matched with the candidate questions according to the semantic similarity calculated for each question pair.
It should be noted that the question-pair-based semantic similarity calculation model provided in the embodiments of the present application is a deep learning sentence-pair model technique. With the continuous development of artificial intelligence and deep learning in recent years, more and more natural language processing tasks are handled with deep learning architectures; common examples include neural network language models, recurrent neural network models and sentence-pair models. In particular, a deep learning sentence-pair model determines the relationship between sentences by identifying semantic information encoded in the source sentence and the target sentence.
When the deep learning sentence-pair model technique is used, a set of training examples is given, each of which is a triple (source sentence, target sentence, relationship between the source and target sentences), and a deep learning model (such as a recurrent neural network model) is trained to learn to predict the probability of the relationship between any two sentences.
It should be particularly noted that the word information input at the input layer of the question-pair-based semantic similarity calculation model provided in the embodiment of the present application includes a word set generated by segmenting the question into words, a part-of-speech set generated by recognizing the part of speech of each word in the word set, and a synonym set generated by recognizing synonyms of each word in the word set.
The coding layer performs semantic analysis on the words supplied by the input layer, learning the context information of each word in the question and the structural feature information of the question from the words generated by word segmentation. Specifically, the information of the words on the left and right of each word in the question can be learned through a bidirectional recurrent neural network, and the semantic relevance between each word and the other words in the question can be calculated, so as to determine the importance of each word in the question and learn the structural features of the question.
After the coding layer determines the importance of each word in the two questions of the question pair and learns their structural features, the local interaction layer performs semantic relevance analysis on the two questions on this basis. It can be understood that, in this analysis, more important words have a greater influence. Specifically, the semantic relevance of each word in the first question to the second question is calculated in a weighted manner, with the importance of each word as its weight, yielding the semantic relevance of each word of the first question to the second question; the semantic relevance of each word in the second question to the first question is obtained through similar steps.
In order to reduce the amount of calculation for the subsequent semantic similarity, the aggregation layer extracts features from the output of the local interaction layer by max/min pooling, aggregates the extracted features by concatenation, and the semantic similarity of the question pair is calculated from the aggregated features.
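A simplified sketch of the importance-weighted local interaction and the max/min pooling aggregation is given below, using cosine similarity over toy word vectors; the real model operates on learned encodings, so the vectors and weights here are illustrative only.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u)) or 1.0  # guard against zero vectors

def cosine(u, v):
    return dot(u, v) / (norm(u) * norm(v))

def local_interaction(words_a, words_b, importance_b):
    # For each word vector of question A, compute its importance-weighted
    # semantic relevance to every word of question B.
    relevance = []
    for wa in words_a:
        scores = [w * cosine(wa, wb) for wb, w in zip(words_b, importance_b)]
        relevance.append(scores)
    return relevance

def aggregate(relevance):
    # Max/min pooling over each word's relevance scores, then concatenation.
    max_pool = [max(row) for row in relevance]
    min_pool = [min(row) for row in relevance]
    return max_pool + min_pool

rel = local_interaction([[1.0, 0.0], [0.0, 1.0]],
                        [[1.0, 0.0], [0.0, 1.0]],
                        [0.5, 0.5])
features = aggregate(rel)
```

Pooling fixes the length of the feature vector regardless of question length, which is what keeps the subsequent similarity calculation cheap.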
In order to enable the coding layer to implement the above functions, one possible implementation is that the coding layer includes a bidirectional recurrent neural network layer, a first normalization layer and a stacked bidirectional self-attention layer.
A recurrent neural network is a neural network that takes sequence data as input and recurses along the evolution direction of the sequence, with all nodes (recurrent units) connected in a chain. The difference is that an ordinary recurrent neural network performs recursive calculation in one direction (typically the evolution direction of the sequence), whereas a bidirectional recurrent neural network performs recursive calculation in both directions (typically the evolution direction of the sequence and the opposite direction).
It can be understood that natural language, as sequence data, generally has strong internal relevance, and a bidirectional recurrent neural network can encode each item of data by combining the data before and after it in the sequence, thereby improving the rationality of the encoding.
The first normalization layer normalizes the data, facilitating subsequent processing.
The stacked bidirectional self-attention layer determines the importance of each word in the question through a self-attention mechanism, thereby providing the weights for the subsequent calculation of semantic relevance between words and questions.
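One way to read the self-attention-based importance is sketched below: each word's score aggregates its similarity to the other words, and a softmax turns the scores into weights. This is a deliberate simplification of the stacked bidirectional self-attention layer, and the word vectors are toy examples.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def word_importance(word_vectors):
    # Each word's raw score is the sum of its dot-product similarities to
    # every other word in the question; softmax normalizes the scores so
    # they sum to 1 and can serve as importance weights.
    scores = []
    for i, wi in enumerate(word_vectors):
        s = sum(sum(a * b for a, b in zip(wi, wj))
                for j, wj in enumerate(word_vectors) if j != i)
        scores.append(s)
    return softmax(scores)

weights = word_importance([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

In this toy input the first two words agree with each other and therefore receive the larger, equal weights.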
In order for the local interaction layer to implement the above functions, one possible implementation manner is that the local interaction layer includes a bidirectional multi-angle similarity analysis layer and a second normalization layer.
It should be noted that the bidirectional multi-angle similarity analysis layer provided in the embodiment of the present application calculates the semantic relevance between a word and a question from multiple angles, and after comprehensive processing of the calculation results, a comprehensive result of the semantic relevance calculation is obtained.
A first possible angle is to calculate the semantic relevance of the word to each word included in the question, and take the result as the semantic relevance between the word and the question.
A second possible angle is to calculate the semantic relevance of the word to the most important word in the question, and take the result as the semantic relevance between the word and the question.
A third possible angle is to first generate a semantic feature of the question based on the importance of the words it includes, and then calculate the semantic relevance of the word to this semantic feature as the semantic relevance between the word and the question.
It is to be understood that the plurality of angles used in generating the comprehensive result of the semantic relevance calculation may include any of the possible angles described above, and may also include other possible angles, which is not limited in the embodiment of the present application.
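The three angles described above can be sketched with cosine similarity over toy vectors; the real layer works on learned representations, so the vectors and importance weights here are illustrative assumptions.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def angle_all_words(word_vec, question_vecs):
    # Angle 1: average relevance of the word to every word in the question.
    sims = [cosine(word_vec, q) for q in question_vecs]
    return sum(sims) / len(sims)

def angle_most_important(word_vec, question_vecs, importance):
    # Angle 2: relevance of the word to the single most important word.
    best = max(range(len(importance)), key=lambda i: importance[i])
    return cosine(word_vec, question_vecs[best])

def angle_weighted_feature(word_vec, question_vecs, importance):
    # Angle 3: relevance to an importance-weighted semantic feature vector.
    dim = len(word_vec)
    feature = [sum(w * v[k] for w, v in zip(importance, question_vecs))
               for k in range(dim)]
    return cosine(word_vec, feature)
```

Each angle yields one relevance score, and the comprehensive result combines the scores of all angles used.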
To sum up, the question matching method provided by the embodiment of the present application includes: acquiring the question to be matched, and matching it by using the dictionary tree; if the matching fails, retrieving a plurality of candidate questions similar to the question to be matched from the preset question bank, and matching the question to be matched with the candidate questions by using the trained question-pair-based semantic similarity calculation model. The question-pair-based semantic similarity calculation model includes an input layer, a coding layer, a local interaction layer, an aggregation layer and an output layer: the input layer inputs word information of the question pair; the coding layer performs semantic analysis on the questions in the question pair, determining the importance of each word in a question and learning the structural features of the question; the local interaction layer performs semantic relevance analysis on the two questions in the question pair; the aggregation layer performs feature extraction and aggregation on the output of the local interaction layer; and the output layer calculates the semantic similarity of the question pair. In this way, the semantic similarity of question pairs is calculated by the trained question-pair-based semantic similarity calculation model, semantics are taken as a reference factor in the search for similar candidate questions, and the accuracy of determining similar questions is improved.
In addition, in order to make the question matching method proposed in the embodiment of the present application applicable to different service scenarios and continuously updatable over time, one possible implementation is that, before step S101 of acquiring the question to be matched, the method further includes: updating the dictionary tree, the preset question bank and the corresponding index bank.
Based on the foregoing description, the dictionary tree and the preset question bank in the embodiment of the present application are both storage forms of frequently asked questions. Different service scenarios require different frequently asked questions, and the frequently asked questions may change over time; therefore, the dictionary tree and the preset question bank may be updated before the question to be matched is acquired, thereby ensuring the accuracy of the question matching method of the embodiment of the present application.
It should be noted that, after the dictionary tree and the preset question bank are updated, the corresponding index bank also needs to be updated to support retrieval over the dictionary tree and the preset question bank.
Full-text retrieval technology is essentially search engine technology, which automatically collects information from the internet and, after collating it, makes it available for user queries. Search engine technology comprises steps such as web crawling, web indexing, web retrieval and search result sorting. It supports dynamic index updating and near-real-time search of web pages, and is currently applied in a large number of search scenarios.
It can be understood that, since information on the internet is continuously updated, search engine technology enables near-real-time retrieval of network information. When full-text retrieval technology is used to retrieve the preset question bank, near-real-time search of the bank can be guaranteed, even as it is continuously updated, through steps such as data crawling, data indexing, data retrieval and search result sorting.
Based on the foregoing description, matching the question to be matched with the dictionary tree in step S102 can be completed quickly, whereas matching with the semantic similarity calculation model involves a large amount of calculation and is less efficient.
In order to complete matching through the dictionary tree for as many questions to be matched as possible, one possible implementation is to homogenize different expressions of the same question. Fig. 4 is a schematic flowchart of the homogenization processing provided in the embodiment of the present application. As shown in fig. 4, based on the method flow shown in fig. 3, step S102, matching the question to be matched by using the dictionary tree, includes:
Step S11, removing the modal particles in the question to be matched.
A modal particle is a function word expressing mood, often used at the end of a sentence or at a pause within a sentence to convey various moods; common examples are the Chinese sentence-final particles expressing exclamation, continuation or interrogation.
Step S12, unifying the punctuation marks in the question to be matched.
Specifically, the punctuation marks may be uniformly replaced, for example with "/", so that they no longer convey mood and are regarded only as pause markers.
It should be noted that, because a plurality of similar questions need to be generated from the question to be matched in step S13, and considering that different users have different habits in using modal particles and punctuation marks, while these have no influence on the essential content of the question, the modal particles may first be removed and the punctuation marks uniformly converted into pause markers to facilitate subsequent processing.
Step S13, performing synonym replacement on the question to be matched to generate a plurality of similar questions.
Step S14, matching each similar question by using the dictionary tree.
It can be understood that, in order to match the question to be matched with the frequently asked questions stored in the dictionary tree, the modal particles of the frequently asked questions were removed and their punctuation marks uniformly processed when the dictionary tree was generated. When the dictionary tree is used, the same operations are performed on the question to be matched, so as to increase the probability of a successful match between the question to be matched and a frequently asked question.
In addition, although natural language offers many ways of expressing a question to be matched, if any one of these expressions is stored in advance in the dictionary tree, matching can be performed using the dictionary tree. Therefore, synonym replacement can be performed on the question to be matched to generate a plurality of similar questions expressed differently from it. The similar questions are matched using the dictionary tree, and if any one of them matches successfully, the question to be matched is considered successfully matched.
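Steps S11-S13 can be sketched as follows. The particle list, synonym table and sample question are hypothetical placeholders, and the sketch uses English for readability although the patent targets Chinese text.

```python
import re

MODAL_PARTICLES = {"ah", "eh", "huh"}          # placeholder particle list
SYNONYMS = {"reset": ["restore", "reinitialize"]}  # illustrative synonym table

def homogenize(question):
    # Steps S11/S12: drop modal particles and unify punctuation to "/".
    question = re.sub(r"[?!.,;:]", "/", question.lower())
    words = [w for w in question.replace("/", " / ").split()
             if w not in MODAL_PARTICLES]
    return " ".join(words)

def similar_questions(question):
    # Step S13: synonym replacement generates alternative phrasings,
    # each of which would then be matched against the dictionary tree.
    base = homogenize(question)
    variants = [base]
    for word, syns in SYNONYMS.items():
        if word in base.split():
            for s in syns:
                variants.append(" ".join(s if w == word else w
                                         for w in base.split()))
    return variants

variants = similar_questions("Reset my password, huh?")
```

If any variant matches a node in the dictionary tree, the question to be matched is considered successfully matched.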
In the question matching method provided in the embodiment of the present application, the trained question-pair-based semantic similarity calculation model needs to be used to match the question to be matched with the candidate questions. Regarding the training of this model, one possible implementation is shown in fig. 5, which is a schematic flowchart of the training steps of the question-pair-based semantic similarity calculation model provided in the embodiment of the present application. As shown in fig. 5, the model is trained through the following steps:
Step S201, acquiring a reference question pair and the reference semantic similarity corresponding to the reference question pair.
The reference question pair includes a first reference question and a second reference question.
The reference question pair and the reference semantic similarity constitute a training example for training the question-pair-based semantic similarity calculation model.
Based on the foregoing description of the deep learning sentence-pair model technique, the reference question pair and the reference semantic similarity form the triple of a training example: the first reference question serves as the source sentence, the second reference question as the target sentence, and the reference semantic similarity as the relationship between them.
It should be particularly noted that, in the embodiment of the present application, data in the frequently asked questions list is used to form the reference question pairs and reference semantic similarities. However, the training examples may be unbalanced. Specifically, if two questions have the same answer, the question pair they form can serve as a training positive example, with a high reference semantic similarity, for example 1. If two questions have different answers, the question pair they form can serve as a training negative example, with a low reference semantic similarity, for example 0. It can be appreciated that the number of training positive examples that can be formed from the frequently asked questions list is much smaller than the number of training negative examples, so the positive examples would have too little effect on model training.
In order to balance the influence of the training positive examples and training negative examples on model training, the embodiment of the present application provides one possible implementation. Fig. 6 is a schematic flowchart of generating training positive examples and training negative examples provided in the embodiment of the present application. As shown in fig. 6, the method includes:
Step S21, performing data cleaning on the frequently asked questions list.
Specifically, in the frequently asked questions list, entries whose questions are similar and whose corresponding answers are also similar can be merged, while entries whose questions are the same but whose corresponding answers differ can have their answers unbound. In addition, English questions in the frequently asked questions list can be deleted.
Step S22, determining the plurality of questions corresponding to each answer.
Step S23, generating training positive examples from the plurality of questions corresponding to each answer.
Based on the foregoing description, the number of training positive examples that can be formed from the frequently asked questions list is much smaller than the number of training negative examples, so the training positive examples need to be formed preferentially.
A training positive example is characterized by two questions having the same answer, so training positive examples can be generated from the plurality of questions corresponding to each answer.
In addition, considering that different answers correspond to different numbers of questions, it may be arranged in advance that each answer yields a similar number of training positive examples.
And step S24, generating training counterexamples according to the questions corresponding to different answers.
It should be appreciated that to reduce the impact of counter-examples on model training, a more refined strategy is employed for the generation of training counter-examples.
Specifically, different answers may correspond to questions with similar descriptions, and in order to make the trained model distinguish the questions with similar descriptions, it is necessary to use this as a training counter example as much as possible.
Secondly, the number of training counter examples needs to be controlled to be close to the number of training positive examples.
To avoid the same question appearing in different training negative examples, questions that have already been selected need to be marked during the selection of training negative examples.
With the above method for generating training examples, reference question pairs and reference semantic similarities that reflect practical use can be obtained from the frequently asked question list.
In practical use, part of the training examples generated above may be used as a development data set and the rest as a training data set. For example, 10% of the training samples may be used for development and 90% for training, which is not limited in the embodiments of the present application.
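For illustration only, the generation of balanced training positive and negative examples described in steps S21 to S24 may be sketched as follows; the answer-to-questions mapping and all names here are hypothetical, and the duplicate check is a simplified version of the selection marking described above:

```python
from itertools import combinations
import random

def build_training_pairs(faq, neg_ratio=1.0, seed=0):
    """Generate training positive and negative examples from a hypothetical
    answer -> [question variants] mapping standing in for the FAQ list.
    Positive pairs share an answer (similarity 1); negative pairs draw
    questions from different answers (similarity 0), capped near the
    positive count to keep the two balanced."""
    rng = random.Random(seed)
    positives = []
    for questions in faq.values():
        positives += [(q1, q2, 1) for q1, q2 in combinations(questions, 2)]

    answers = list(faq)
    negatives, used = [], set()
    target = int(len(positives) * neg_ratio)
    attempts = 0
    while len(negatives) < target and len(answers) > 1 and attempts < 10 * target:
        attempts += 1
        a1, a2 = rng.sample(answers, 2)
        pair = (rng.choice(faq[a1]), rng.choice(faq[a2]))
        if pair in used:          # mark questions already selected
            continue
        used.add(pair)
        negatives.append(pair + (0,))
    return positives, negatives
```

The `neg_ratio` parameter corresponds to keeping the negative-example count close to the positive-example count, as described in step S24.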
Step S202, performing word segmentation processing on the first reference problem and the second reference problem respectively to generate a first reference word set corresponding to the first reference problem and a second reference word set corresponding to the second reference problem.
Step S203, determining a part of speech corresponding to each reference word in the first reference word set and the second reference word set to generate a first reference part of speech set corresponding to the first reference word set and a second reference part of speech set corresponding to the second reference word set.
Step S204, determining synonyms corresponding to each reference word in the first reference word set and the second reference word set to generate a first reference synonym set corresponding to the first reference word set and a second reference synonym set corresponding to the second reference word set.
Step S205, inputting the first reference word set, the second reference word set, the first reference part of speech set, the second reference part of speech set, the first reference synonym set and the second reference synonym set into an encoding layer of a semantic similarity calculation model based on the problem pairs.
Based on the foregoing description of the semantic similarity calculation model based on the question pairs, it can be known that the word information input by the input layer includes a word set, a part of speech set, and a synonym set. Correspondingly, in the model training process, the input layer needs to perform word segmentation, part-of-speech recognition and synonym recognition on the first reference problem and the second reference problem in the input reference problem pair respectively.
It should be noted that the input first reference question and second reference question are processed by the input layer to obtain the first reference word set, first reference part-of-speech set and first reference synonym set corresponding to the first reference question, and the second reference word set, second reference part-of-speech set and second reference synonym set corresponding to the second reference question.
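For illustration, the input-layer processing described above (word segmentation, part-of-speech recognition and synonym recognition) may be sketched as follows; the whitespace tokenizer, tag set and lookup tables are hypothetical simplifications, not the actual implementation, and a real system would use a proper Chinese word segmenter:

```python
def featurize(question, pos_dict, syn_dict):
    """Produce the (word set, part-of-speech set, synonym set) triple that
    the input layer feeds to the coding layer. Tokenization here is naive
    whitespace splitting; pos_dict and syn_dict are hypothetical lookup
    tables standing in for a part-of-speech tagger and a thesaurus."""
    words = question.lower().split()
    parts = [pos_dict.get(w, "x") for w in words]   # "x": unknown part of speech
    syns = [syn_dict.get(w, w) for w in words]      # fall back to the word itself
    return words, parts, syns
```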
And step S206, training parameters of the semantic similarity calculation model based on the question pairs according to the output of the semantic similarity calculation model based on the question pairs and the reference semantic similarity.
It should be noted that, based on the foregoing description, the input layer functions to convert the question in the question pair into a word set, a part of speech set, and a synonym set suitable for the model processing. Thus, the parameters of the input layer remain unchanged during the model training process.
The training of the model parameters mainly aims at the parameters in the coding layer, the local interaction layer, the aggregation layer and the output layer.
And step S207, finishing training of the semantic similarity calculation model based on the question pair when the accuracy of the semantic similarity calculation model based on the question pair is greater than a preset threshold.
When the accuracy of the semantic similarity calculation model is greater than the preset threshold, the trained model meets the practical requirements, and the trained semantic similarity calculation model based on question pairs is obtained.
Therefore, training of the semantic similarity calculation model based on the problem pairs is achieved.
In order to more clearly illustrate the matching method of the problems provided by the embodiments of the present application, the following description is made.
Fig. 7 is a flowchart illustrating a question matching method according to an embodiment of the present application, and fig. 8 is a schematic diagram of the initialization loading in fig. 7. As shown in fig. 7 and 8, initialization loading is performed first: according to a pre-stored frequently asked question list, a dictionary tree based on the frequently asked questions, a preset question bank supporting full-text retrieval, and the reference question pairs and reference semantic similarities for training the semantic similarity calculation model based on question pairs are established offline, and training of the semantic similarity calculation model based on question pairs is completed.
When entries in the frequently asked question list are added, deleted or updated, a message event triggers updates to the dictionary tree, the preset question bank and the corresponding index banks, mainly the index bank of the dictionary tree and the inverted index bank of the preset question bank.
After the question to be matched is obtained, it is preprocessed: modal particles are removed, punctuation marks are unified, and synonym replacement is performed, thereby generating a plurality of similar questions. Each similar question is then matched using the dictionary tree; if the matching succeeds, matching of the question to be matched is complete.
And if the matching fails, searching a plurality of candidate problems similar to the problem to be matched from the preset problem library by using a full-text retrieval technology of the preset problem library. And matching the problem to be matched with the candidate problem by using the trained semantic similarity calculation model based on the problem pair, and completing the matching of the problem to be matched so as to determine the frequently asked problem.
Fig. 9 is a schematic structural diagram of the semantic similarity calculation model based on question pairs according to an embodiment of the present application. As shown in fig. 9, the question to be matched and a candidate question are input into the model as one question pair, and the input layer processes each of them to generate the corresponding word set, part-of-speech set and synonym set. These sets are input into the coding layer, where a bidirectional recurrent neural network layer extracts the relation between each word and its preceding and following words, and a bidirectional self-attention layer determines the importance of each word in the question. In the local interaction layer, semantic relevance is calculated from the words in the question to be matched toward the candidate question and from the words in the candidate question toward the question to be matched, and a comprehensive result of the semantic relevance calculation is obtained after multi-angle analysis.
And after the aggregation layer performs feature extraction and aggregation on the output of the local interaction layer, the output layer outputs the semantic similarity between each candidate problem and the problem to be matched, and selects the candidate problem with the highest semantic similarity as the matching result of the problem to be matched.
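For illustration, the bidirectional interaction performed by the local interaction layer may be sketched with plain cosine similarity computed in both directions. This single-angle NumPy sketch has no learned parameters and merely stands in for the multi-angle analysis described above:

```python
import numpy as np

def cosine_matrix(a, b):
    """Pairwise cosine similarity between rows of a (len_a x d) and b (len_b x d)."""
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-9)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-9)
    return a @ b.T

def local_interaction(enc_a, enc_b):
    """For each encoded word of question A, its best match in question B,
    and vice versa. A single-angle stand-in: the actual layer compares the
    two questions under several learned perspectives."""
    sim = cosine_matrix(enc_a, enc_b)
    return sim.max(axis=1), sim.max(axis=0)
```

The two returned vectors correspond to the two directions of the semantic relevance calculation, one per word of each question.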
Based on the above description of the question-answering system, multiple questions with similar semantics can be replied to with the same answer. Therefore, the question matching method provided in the embodiments of the present application can determine frequently asked questions having the same semantics as the question to be matched, and can thus be used for replying to questions. Specifically, fig. 10 is a schematic flow chart of a question reply method according to an embodiment of the present application, and as shown in fig. 10, the method includes:
step S301, a question to be replied is acquired.
A question to be replied to is a question posed by the user through the human-computer interaction interface of the question-answering system, which needs to be replied to by the machine.
Step S302, using the question matching method described in the foregoing embodiment, determines candidate questions matched with the question to be replied.
In the question-answering system based on the frequently asked question list, the frequently asked questions with the same semantic meaning as the questions proposed by the user can be determined as candidate questions from the frequently asked questions stored in advance by adopting the question matching method.
Step S303, reply is performed using the candidate answer corresponding to the candidate question.
It can be understood that, since the candidate question has the same semantic meaning as the question provided by the user, the corresponding answer is also the same, and the candidate answer corresponding to the candidate question can be used for replying.
Therefore, the answer of the frequently asked question stored in advance is used as the answer of the question provided by the user to reply in a question matching mode.
In order to implement the above embodiments, the embodiments of the present application further provide a problem matching apparatus. Fig. 11 is a schematic structural diagram of a matching apparatus for a problem according to an embodiment of the present application, as shown in fig. 11, the apparatus includes: a first obtaining module 410, a first matching module 420, a retrieving module 430, and a second matching module 440.
A first obtaining module 410, configured to obtain a question to be matched.
Wherein the question to be matched is a question posed by the user.
The first matching module 420 is configured to match the to-be-matched problem by using a dictionary tree.
A dictionary tree (trie), also called a word search tree, is a tree structure in which the root node contains no character and every other node contains exactly one character. The string corresponding to a node is obtained by concatenating the characters along the path from the root to that node, and the characters held by the children of any one node are all distinct.
And the retrieving module 430 is configured to retrieve a plurality of candidate questions similar to the question to be matched from the preset question bank when the first matching module fails to match.
When the preset question bank is searched, only frequently asked questions that share keywords with the question to be matched need to be retrieved; the content form does not need to be completely consistent. It can be understood that the more keywords a frequently asked question in the preset question bank shares with the question to be matched, the higher their similarity in content form. Therefore, in the embodiment of the present application, the retrieval results are sorted in descending order of this similarity, and the first N frequently asked questions are taken as candidate questions, where N is a positive integer.
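For illustration, keyword-based retrieval over an inverted index with descending-similarity ranking may be sketched as follows; the tokenizer and data structures are hypothetical simplifications of the full-text retrieval technology:

```python
from collections import defaultdict

def build_inverted_index(faq_questions, tokenize):
    """Map each keyword to the ids of the FAQ questions containing it."""
    index = defaultdict(set)
    for qid, question in enumerate(faq_questions):
        for token in tokenize(question):
            index[token].add(qid)
    return index

def retrieve_candidates(query, faq_questions, index, tokenize, n=3):
    """Score each FAQ question by the number of keywords it shares with the
    query, sort in descending order, and keep the first N as candidates."""
    scores = defaultdict(int)
    for token in set(tokenize(query)):
        for qid in index.get(token, ()):
            scores[qid] += 1
    ranked = sorted(scores, key=lambda qid: -scores[qid])
    return [faq_questions[qid] for qid in ranked[:n]]
```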
And the second matching module 440 is configured to match the problem to be matched with the candidate problem by using the trained problem pair-based semantic similarity calculation model.
The semantic similarity calculation model based on question pairs includes an input layer, a coding layer, a local interaction layer, an aggregation layer and an output layer. The input layer is used for inputting word information of the question pair; the coding layer is used for performing semantic analysis on the questions in the question pair, determining the importance of each word in a question and learning the structural features of the question; the local interaction layer is used for performing semantic relevance analysis on the two questions in the question pair; the aggregation layer is used for performing feature extraction and aggregation on the output of the local interaction layer; and the output layer is used for calculating the semantic similarity of the question pair.
Further, in order to enable the problem matching method provided in the embodiment of the present application to be applicable to different service scenarios and continuously updated over time, a possible implementation manner is that the apparatus further includes: the updating module 450 is configured to update the dictionary tree, the preset problem database, and the corresponding index database.
It should be noted that, after the dictionary tree and the preset problem library are updated, the corresponding index library also needs to be updated to realize the search function of the dictionary tree and the preset problem library.
Further, in order to complete matching through the dictionary tree for as many problems to be matched as possible, a possible implementation manner is that fig. 12 is a schematic structural diagram of the first matching module provided in the embodiment of the present application. As shown in fig. 12, based on the apparatus structure shown in fig. 11, the first matching module 420 includes:
the removing submodule 421 is configured to remove the linguistic words in the question to be matched.
A modal particle is a function word expressing tone or mood, often used at the end of a sentence or at a pause within a sentence to express various moods; common examples are Chinese sentence-final particles such as "哦", "呢" and "吗".
And the unifying submodule 422 is used for unifying punctuations in the problem to be matched.
Specifically, all punctuation marks in the question may be replaced with a single unified mark such as "/"; that is, punctuation is stripped of any tonal effect and regarded only as a sign of a pause.
And the replacing sub-module 423 is used for performing synonym replacement on the problem to be matched so as to generate a plurality of similar problems of the problem to be matched.
A first matching sub-module 424 for matching each similarity problem separately using a dictionary tree.
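For illustration, the preprocessing performed by the removing, unifying and replacing sub-modules may be sketched as follows; the particle list, punctuation list and thesaurus are hypothetical placeholders, and naive substring replacement is used purely for brevity:

```python
from itertools import product

def preprocess(question, modal_particles, punctuation):
    """Unify punctuation into a single pause mark and strip modal particles.
    Both word lists are illustrative placeholders."""
    for mark in punctuation:
        question = question.replace(mark, ",")
    for particle in modal_particles:
        question = question.replace(particle, "")
    return question

def similar_questions(question, syn_dict):
    """Generate similar questions by substituting each word with its synonyms
    (syn_dict is a hypothetical thesaurus: word -> list of synonyms)."""
    options = [[word] + syn_dict.get(word, []) for word in question.split()]
    return [" ".join(combo) for combo in product(*options)]
```

Each generated similar question can then be handed to the first matching sub-module for dictionary-tree matching.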
Further, in order to match the problem to be matched with the candidate problem by using the semantic similarity calculation model based on the problem pair, a possible implementation manner is that fig. 13 is a schematic structural diagram of the second matching module provided in the embodiment of the present application. As shown in fig. 13, based on the device structure shown in fig. 11, the second matching module 440 includes:
and a pair-grouping submodule 441, configured to group each candidate question with a question to be matched into a question pair.
An input sub-module 442 for inputting the question pairs into the semantic similarity calculation model based on the question pairs.
And the second matching sub-module 443 is configured to match the question to be matched with the candidate question according to the semantic similarity corresponding to each question pair.
Further, in order to train the semantic similarity computation model based on the question pair, a possible implementation manner is that fig. 14 is another schematic structural diagram of a matching device of the question proposed in the embodiment of the present application, as shown in fig. 14, based on the device structure shown in fig. 11, the device further includes:
the second obtaining module 510 is configured to obtain the reference question pair and the semantic similarity of reference corresponding to the reference question pair.
Wherein the reference problem pair comprises a first reference problem and a second reference problem.
The reference question pair and the reference semantic similarity refer to training examples for training a semantic similarity calculation model based on the question pair.
The processing module 520 is configured to perform word segmentation on the first reference problem and the second reference problem, respectively, to generate a first reference word set corresponding to the first reference problem and a second reference word set corresponding to the second reference problem.
A first determining module 530, configured to determine a part of speech corresponding to each reference word in the first reference word set and the second reference word set, so as to generate a first reference part of speech set corresponding to the first reference word set and a second reference part of speech set corresponding to the second reference word set.
A second determining module 540, configured to determine a synonym corresponding to each reference word in the first reference word set and the second reference word set, so as to generate a first reference synonym set corresponding to the first reference word set and a second reference synonym set corresponding to the second reference word set.
An input module 550, configured to input the first reference word set, the second reference word set, the first reference part-of-speech set, the second reference part-of-speech set, the first reference synonym set, and the second reference synonym set into an encoding layer of a semantic similarity calculation model based on the problem pairs.
And the training module 560 is configured to train parameters of the semantic similarity calculation model based on the question pairs according to the output of the semantic similarity calculation model based on the question pairs and the reference semantic similarity.
The training of the model parameters is mainly to train parameters in a coding layer, a local interaction layer, an aggregation layer and an output layer, and the parameters of the input layer are kept unchanged.
A completion module 570, configured to complete training of the semantic similarity calculation model based on the question pair when the accuracy of the semantic similarity calculation model based on the question pair is greater than a preset threshold.
When the accuracy of the semantic similarity calculation model is greater than the preset threshold, the trained model meets the practical requirements, and the trained semantic similarity calculation model based on question pairs is obtained.
Further, in order to enable the coding layer to perform semantic analysis on the questions in the question pair, determine the importance of each word in a question and learn the structural features of the question, one possible implementation is that the coding layer includes a bidirectional recurrent neural network layer, a first normalization layer and a stacked bidirectional self-attention layer.
Further, in order to enable the local interaction layer to perform semantic relatedness analysis on two questions in the question pair, the local interaction layer comprises a bidirectional multi-angle similarity analysis layer and a second normalization layer.
It should be noted that the foregoing description of the embodiment of the problem matching method is also applicable to the problem matching apparatus in the embodiment of the present application, and is not repeated herein.
To sum up, when matching questions, the question matching apparatus provided in the embodiment of the present application acquires the question to be matched and matches it using the dictionary tree. If the matching fails, a plurality of candidate questions similar to the question to be matched are retrieved from a preset question bank, and the question to be matched is matched against the candidate questions using the trained semantic similarity calculation model based on question pairs. The model includes an input layer, a coding layer, a local interaction layer, an aggregation layer and an output layer: the input layer is used for inputting word information of the question pair; the coding layer is used for performing semantic analysis on the questions in the question pair, determining the importance of each word in a question and learning the structural features of the question; the local interaction layer is used for performing semantic relevance analysis on the two questions in the question pair; the aggregation layer is used for performing feature extraction and aggregation on the output of the local interaction layer; and the output layer is used for calculating the semantic similarity of the question pair. In this way, the trained model computes the semantic similarity of question pairs, semantics are taken into account when searching for similar candidate questions, and the accuracy of determining similar questions is improved.
In order to implement the above embodiments, the present application further provides a question replying device, and fig. 15 is a schematic structural diagram of the question replying device provided by the present application. As shown in fig. 15, the device includes: a third obtaining module 610, a third determining module 620 and a replying module 630.
A third obtaining module 610, configured to obtain a question to be replied.
A question to be replied to is a question posed by the user through the human-computer interaction interface of the question-answering system, which needs to be replied to by the machine.
A third determining module 620, configured to determine candidate questions that match the question to be replied to, using the matching method of the question as in the foregoing embodiments.
Specifically, from the pre-stored frequently asked questions, frequently asked questions having the same semantic meaning as the questions posed by the user may be determined as candidate questions.
And a replying module 630, configured to reply by using the candidate answer corresponding to the candidate question.
It can be understood that, since the candidate question has the same semantic meaning as the question provided by the user, the corresponding answer is also the same, and the candidate answer corresponding to the candidate question can be used for replying.
It should be noted that the foregoing description of the embodiment of the question reply method is also applicable to the question replying device in the embodiment of the present application, and is not repeated here.
Therefore, the answer of the frequently asked question stored in advance is used as the answer of the question provided by the user to reply in a question matching mode.
In order to implement the foregoing embodiment, an embodiment of the present application further provides a question-answering system, fig. 16 is a schematic structural diagram of the question-answering system provided in the embodiment of the present application, and as shown in fig. 16, the question-answering system includes:
and the question-answering interface 710 is used for receiving input contents of the user and displaying the generated reply contents.
And the distribution agent 720 is used for distributing the input content of the user to the corresponding reply device according to the type of the input content of the user. The replying device comprises a replying device 730 of the question, a natural language understanding device and a natural language processing device.
The question replying device 730 is configured to receive the question to be replied sent by the distribution agent 720, and determine a corresponding answer from the preset question library 740.
In addition, the question-answering system further includes data message middleware and an automatic program. The automatic program updates the preset question bank at preset time intervals, specifically deleting and adding frequently asked questions in the preset question bank and the corresponding index bank, and the data message middleware sends messages about updates of the preset question bank to the question replying device 730.
It should be noted that the foregoing description of the embodiment of the device for replying to questions is also applicable to the question answering system in the embodiment of the present application, and is not repeated here.
Therefore, the input content of the user is received through the question-answering interface of the question-answering system, the distribution agent transmits the questions proposed by the user to the question replying device, the question replying device generates answers to the questions to reply, and the answers are displayed through the question-answering interface.
In order to implement the foregoing embodiment, an embodiment of the present application further provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, when executing the computer program, performing the steps of:
and step S101, acquiring the problem to be matched.
Based on the above description of the question-answering system, it can be known that the question to be matched is a question posed by the user.
There are various ways to acquire the question posed by the user; for example, it can be acquired through a human-computer interaction interface, through which the user may pose the question via keyboard input, voice input or other input means.
And step S102, matching the problems to be matched by using the dictionary tree.
It should be noted that, in the embodiment of the present application, questions are matched at two levels, characters and semantics: the dictionary tree is used for character-level matching, and the semantic similarity calculation model based on question pairs is used for semantic matching.
A dictionary tree (trie), also called a word search tree, is a tree structure in which the root node contains no character and every other node contains exactly one character. The string corresponding to a node is obtained by concatenating the characters along the path from the root to that node, and the characters held by the children of any one node are all distinct.
It can be understood that, in the embodiment of the present application, a plurality of frequently asked questions are stored in advance, and each frequently asked question is filled into the dictionary tree as one character string, so that nodes in the dictionary tree correspond to the frequently asked questions. The character string corresponding to the question to be matched is split into individual characters, which are matched in turn using the dictionary tree; if a node matching the character string of the question to be matched can be found in the dictionary tree, the question to be matched is determined to match the frequently asked question corresponding to that node.
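For illustration, filling frequently asked questions into a dictionary tree and matching a question character by character may be sketched as follows (a minimal trie, not the actual implementation):

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.faq = None   # frequently asked question ending at this node

class Trie:
    """Character-level dictionary tree: the root holds no character, each
    other node holds exactly one, and a question matches only if its whole
    character string leads to a node where a stored FAQ ends."""
    def __init__(self):
        self.root = TrieNode()

    def insert(self, question):
        node = self.root
        for ch in question:
            node = node.children.setdefault(ch, TrieNode())
        node.faq = question

    def match(self, question):
        node = self.root
        for ch in question:
            node = node.children.get(ch)
            if node is None:
                return None
        return node.faq
```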
It should be noted that, if matching of the problem to be matched can be completed by using the dictionary tree, character matching of the problem to be matched is completed. That is, the question posed by the user is the same as a candidate question in content, and then in the subsequent reply process, the answer corresponding to the candidate question can be directly used for answering the question posed by the user.
Step S103, if the matching fails, a plurality of candidate questions similar to the question to be matched are searched from a preset question bank.
It should be noted that, if the matching of the question to be matched is not completed in step S102, it is indicated that there is no candidate question in the pre-stored frequently asked questions that is the same as the question posed by the user, and the question to be matched needs to be matched in a semantic analysis manner.
It can be understood that there are a large number of frequently asked questions in the preset question bank, and if each frequently asked question is input into the semantic similarity calculation model together with the question to be matched, the calculation amount is too large, and the efficiency is very low.
In order to reduce the calculation amount in the subsequent process, the frequently asked questions in the preset question bank can be primarily screened. Specifically, a plurality of candidate questions similar to the question to be matched are retrieved from a preset question library, and then each candidate question and the question to be matched are input into a semantic similarity calculation model together, so that the candidate question matched with the question to be matched is determined.
Different from the aforementioned matching using the dictionary tree, when the preset question bank is searched, only frequently asked questions that share keywords with the question to be matched need to be retrieved; the content form does not need to be completely consistent. It can be understood that the more keywords a frequently asked question in the preset question bank shares with the question to be matched, the higher their similarity in content form. Therefore, in the embodiment of the present application, the retrieval results are sorted in descending order of this similarity, and the first N frequently asked questions are taken as candidate questions, where N is a positive integer.
And step S104, matching the problem to be matched with the candidate problem by using the trained semantic similarity calculation model based on the problem pair.
The semantic similarity calculation model based on question pairs includes an input layer, a coding layer, a local interaction layer, an aggregation layer and an output layer. The input layer is used for inputting word information of the question pair; the coding layer is used for performing semantic analysis on the questions in the question pair, determining the importance of each word in a question and learning the structural features of the question; the local interaction layer is used for performing semantic relevance analysis on the two questions in the question pair; the aggregation layer is used for performing feature extraction and aggregation on the output of the local interaction layer; and the output layer is used for calculating the semantic similarity of the question pair.
After step S103 completes the preliminary screening of candidate questions by surface form, the question to be matched still needs to be matched semantically.
The embodiment of the present application matches the question to be matched against the candidate questions by using a question-pair-based semantic similarity calculation model.
One possible implementation is to combine each candidate question with the question to be matched to form a question pair, input each question pair into the question-pair-based semantic similarity calculation model, and match the question to be matched with a candidate question according to the semantic similarity computed for each question pair.
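That pairing-and-scoring implementation can be sketched as below. The similarity scorer here is a deliberate stand-in (character-set Jaccard overlap) for the trained neural model, and the threshold value is an assumption; only the control flow — pair each candidate with the query, score each pair, pick the best scorer above a threshold — reflects the text.

```python
# Hedged sketch of step S104: pair each candidate with the question to
# be matched, score every pair, and return the best candidate if its
# score clears a threshold (otherwise no match).

def similarity_model(question_a, question_b):
    # Stand-in for the question-pair semantic similarity model:
    # Jaccard overlap of character sets, a value in [0, 1].
    a, b = set(question_a), set(question_b)
    return len(a & b) / len(a | b)

def match_question(question, candidates, threshold=0.5):
    pairs = [(similarity_model(question, c), c) for c in candidates]
    best_score, best_candidate = max(pairs)
    return best_candidate if best_score >= threshold else None

candidates = ["how do i reset my password", "what is the refund policy"]
print(match_question("how to reset my password", candidates))
```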
It should be noted that the question-pair-based semantic similarity calculation model provided in the embodiments of the present application is a deep-learning sentence-pair model. With the continuing development of artificial intelligence and deep learning in recent years, more and more natural language processing tasks are handled with deep learning architectures, common examples being neural network language models, recurrent neural network models and sentence-pair models. Specifically, a deep-learning sentence-pair model determines the relationship between two sentences from the semantic information encoded from the source sentence and the target sentence.
When the deep-learning sentence-pair model technique is used, a set of training examples is given, each example being a triple (source sentence, target sentence, relationship between the source sentence and the target sentence), and a deep learning model (such as a recurrent neural network model) is trained to predict the probability of the relationship between any two sentences.
It should be particularly noted that the word information input at the input layer of the question-pair-based semantic similarity calculation model provided in the embodiment of the present application includes: the word set generated by segmenting the question into words, the part-of-speech set generated by recognizing the part of speech of each word in the word set, and the synonym set generated by recognizing synonyms of each word in the word set.
The encoding layer performs semantic analysis on the words supplied by the input layer, learning, from the words produced by segmenting the question, the context information of each word within the question and the structural feature information of the question. Specifically, a bidirectional recurrent neural network can learn, for each word in the question, the information of the words on its left and right, and the semantic relevance between each word and every other word in the question can be computed, thereby determining the importance of each word in the question and learning the structural features of the question.
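The word-importance idea in the paragraph above can be illustrated with a small numpy sketch: each word's relevance to every other word is computed, and a word that receives much attention from the others is treated as more important. The random embeddings and the scaled dot-product scoring are assumptions for illustration, not the patent's exact architecture.

```python
# Illustrative sketch: derive per-word importance from pairwise
# self-attention scores (each word attends to every other word; a word
# attended to heavily is considered important).
import numpy as np

def word_importance(embeddings):
    # Pairwise relevance via scaled dot-product self-attention scores.
    d = embeddings.shape[1]
    scores = embeddings @ embeddings.T / np.sqrt(d)
    # Row-wise softmax: how much each word attends to every other word.
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    # A word's importance: average attention it receives from all words.
    return weights.mean(axis=0)

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))      # 5 words, 8-dim embeddings
imp = word_importance(emb)
print(imp.shape, round(float(imp.sum()), 6))
```

Because each softmax row sums to one, the importance scores form a distribution over the question's words, which is convenient to use later as weights.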
After the encoding layer has determined the importance of each word in the two questions of the question pair and learned the structural features of the questions, the local interaction layer performs semantic relevance analysis between the two questions on that basis. It can be understood that, in this analysis, the more important a word is, the greater its influence on the result. Specifically, the semantic relevance between each word of the first question and the second question is computed in a weighted manner, with word importance as the weight, yielding the semantic relevance of every word in the first question to the second question; the semantic relevance of every word in the second question to the first question is obtained through similar steps.
In order to reduce the amount of computation in the subsequent semantic similarity calculation, the aggregation layer extracts features from the output of the local interaction layer by max/min pooling and aggregates the extracted features by concatenation; the semantic similarity of the question pair is then calculated from the aggregated features.
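The pooling-and-concatenation step described above can be sketched directly; the shapes below are illustrative assumptions, but the operations (max pooling, min pooling, concatenation over the word axis) are the ones the text names.

```python
# Sketch of the aggregation layer: max and min pooling over the
# local-interaction output along the word axis, concatenated into a
# fixed-size feature vector.
import numpy as np

def aggregate(interaction_output):
    # interaction_output: (num_words, feature_dim) from the local
    # interaction layer; pooling removes the variable word axis.
    max_pooled = interaction_output.max(axis=0)
    min_pooled = interaction_output.min(axis=0)
    # Concatenation yields a fixed 2 * feature_dim vector regardless of
    # question length, which keeps the similarity calculation cheap.
    return np.concatenate([max_pooled, min_pooled])

out = aggregate(np.array([[1.0, -2.0], [3.0, 0.5], [-1.0, 4.0]]))
print(out.tolist())
```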
In order for the encoding layer to implement the above functions, one possible implementation is that the encoding layer comprises a bidirectional recurrent neural network layer, a first normalization layer and a stacked bidirectional self-attention layer.
A recurrent neural network is a neural network that takes sequence data as input, recurses along the direction of the sequence's evolution, and chains all its nodes (recurrent units) together. The difference between the two variants is that an ordinary recurrent neural network recurses in a single direction (typically the direction of the sequence's evolution), whereas a bidirectional recurrent neural network recurses in both directions (the direction of evolution and its opposite).
It can be understood that natural language, as sequence data, generally exhibits strong internal dependencies, and a bidirectional recurrent neural network can encode each element of the sequence by combining the data before and after it, making the encoding more sound.
The first normalization layer normalizes the data, facilitating subsequent processing.
The stacked bidirectional self-attention layer determines, through a self-attention mechanism, the importance of each word in a natural-language question, thereby providing the weights for the semantic relevance between words and questions computed subsequently.
In order for the local interaction layer to implement the above functions, one possible implementation is that the local interaction layer comprises a bidirectional multi-angle similarity analysis layer and a second normalization layer.
It should be noted that the bidirectional multi-angle similarity analysis layer provided in the embodiment of the present application calculates the semantic relevance between a word and a question from multiple angles; after the individual results are combined, a composite semantic relevance is obtained.
A first possible angle is to calculate the semantic relevance between the word and each word contained in the question, taking the combined result as the semantic relevance between the word and the question.
A second possible angle is to calculate the semantic relevance between the word and the most important word of the question, taking the result as the semantic relevance between the word and the question.
A third possible angle is first to generate a semantic feature of the question from the importance of the words it contains, and then to calculate the semantic relevance between the word and that semantic feature as the semantic relevance between the word and the question.
It can be understood that the set of angles used to produce the composite semantic relevance may include any of the possible angles described above and may also include other possible angles; the embodiment of the present application places no limitation on this.
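The three angles above can be illustrated with a toy numpy sketch using cosine similarity. The vectors and importance weights below are made up for illustration; only the three scoring strategies correspond to the text.

```python
# Toy sketch of the three relevance angles: (1) relevance to every word
# of the question, combined; (2) relevance to the most important word
# only; (3) relevance to an importance-weighted question feature.
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def angle_1(word, question_words):
    # Relevance to each word of the question, then averaged.
    return sum(cos(word, w) for w in question_words) / len(question_words)

def angle_2(word, question_words, importance):
    # Relevance to the single most important word.
    return cos(word, question_words[int(np.argmax(importance))])

def angle_3(word, question_words, importance):
    # Relevance to an importance-weighted semantic feature of the question.
    feature = (importance[:, None] * question_words).sum(axis=0)
    return cos(word, feature)

word = np.array([1.0, 0.0])
q = np.array([[1.0, 0.0], [0.0, 1.0]])
imp = np.array([0.8, 0.2])
print(angle_1(word, q), angle_2(word, q, imp), angle_3(word, q, imp))
```

A composite relevance could then be any combination of these scores, in line with the statement that the set of angles is not limited.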
To sum up, when matching questions, the electronic device provided by the embodiment of the present application acquires the question to be matched and matches it by using the dictionary tree. If the matching fails, a plurality of candidate questions similar to the question to be matched are retrieved from the preset question bank, and the question to be matched is matched against the candidate questions by using the trained question-pair-based semantic similarity calculation model. The model comprises an input layer, an encoding layer, a local interaction layer, an aggregation layer and an output layer: the input layer inputs word information of the question pair; the encoding layer performs semantic analysis on the questions in the question pair, determining the importance of each word in a question and learning the structural features of the question; the local interaction layer performs semantic relevance analysis on the two questions in the question pair; the aggregation layer performs feature extraction and aggregation on the output of the local interaction layer; and the output layer calculates the semantic similarity of the question pair. In this way the semantic similarity of question pairs is calculated with the trained question-pair-based model, semantics serve as a reference factor when retrieving similar candidate questions, and the accuracy of determining similar questions is improved.
In addition, in order that the electronic device proposed in the embodiment of the present application can be applied to different service scenarios and kept up to date over time, before the question to be matched is acquired in step S101, the method further includes: updating the dictionary tree, the preset question bank and the corresponding index library.
Based on the foregoing description, both the dictionary tree and the preset question bank in the embodiment of the present application are storage forms of frequently asked questions. Different service scenarios require different frequently asked questions, and the questions may change over time, so the dictionary tree and the preset question bank may be updated before the question to be matched is acquired, thereby ensuring the accuracy of the question matching method of the embodiment of the present application.
It should be noted that, after the dictionary tree and the preset question bank are updated, the corresponding index library must also be updated so that the dictionary tree and the preset question bank remain searchable.
Full-text retrieval is essentially a search engine technology: information is collected automatically from the Internet and, after being organized, made available for user queries. Search engine technology covers steps such as web crawling, web indexing, web retrieval and ranking of search results. It supports dynamic index updates and near-real-time search over web pages, and is currently applied in a large number of search scenarios.
It can be understood that, since information on the Internet is continually being updated, search engine technology enables near-real-time retrieval of network information. When full-text retrieval is used to search the preset question bank, steps such as data crawling, data indexing, data retrieval and ranking of search results guarantee near-real-time search of the preset question bank even while it is being continually updated.
Based on the foregoing description, matching the question to be matched with the dictionary tree in step S102 can be completed quickly, whereas matching with the semantic similarity calculation model involves a large amount of computation and is less efficient.
In order that as many questions to be matched as possible can be resolved through the dictionary tree, one possible implementation normalizes different phrasings of the same question. Step S102, matching the question to be matched using the dictionary tree, then includes:
and step S11, removing the tone words in the question to be matched.
The term "qi" refers to a fictitious word representing the mood, which is often used at the end of a sentence or at a pause in a sentence to represent various moods, and the common mood terms include "o", "wool", "do", etc.
Step S12, unifying punctuation marks in the question to be matched.
Specifically, the punctuation mark may be set to be "" or "/", that is, the punctuation mark represents the removal of the effect of tone, and is only regarded as a sign of pause.
It should be noted that, because a plurality of similar problems of the problem to be matched need to be generated in step S13, considering that different users have different habits of using the tone words and punctuation marks, and the tone words and punctuation marks have no influence on the essential content of the problem, the tone words may be removed first, and the punctuation marks are uniformly converted into marks for pause, so as to facilitate subsequent processing.
Step S13: perform synonym replacement on the question to be matched to generate a plurality of similar questions.
Step S14: match each similar question using the dictionary tree.
It can be understood that, in order for the question to be matched to match the frequently asked questions stored in the dictionary tree, the modal particles of the frequently asked questions were removed and their punctuation marks unified when the dictionary tree was generated. When the dictionary tree is used, the same operations are performed on the question to be matched, so as to increase the likelihood of a successful match between the question to be matched and a frequently asked question.
In addition, although natural language offers many ways of phrasing a question to be matched, matching with the dictionary tree can only succeed if one of those phrasings has been stored in the tree in advance. Synonym replacement is therefore performed on the question to be matched, generating a plurality of similar questions phrased differently from it. Each similar question is matched with the dictionary tree, and if any one of them is matched successfully, the question to be matched is considered matched successfully.
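Steps S11 to S14 can be sketched end to end as below. The particle list, the synonym table, the pause mark "/" and the English example sentences are all illustrative assumptions standing in for the Chinese processing the patent describes; the structure (normalize, expand synonyms, walk the trie, accept if any variant matches) follows the text.

```python
# Hedged sketch of steps S11-S14: strip modal particles, unify
# punctuation into a pause mark, expand synonyms into similar
# questions, and match each variant against a trie (dictionary tree)
# of identically normalized frequently asked questions.

MODAL_PARTICLES = {"ah", "eh", "oh"}        # toy stand-ins for particles
SYNONYMS = {"modify": ["change"], "change": ["modify"]}
PUNCTUATION = "?!,."

def normalize(question):
    for mark in PUNCTUATION:
        question = question.replace(mark, " / ")   # unified pause mark
    return [w for w in question.split() if w not in MODAL_PARTICLES]

def expand_synonyms(words):
    variants = [words]
    for i, w in enumerate(words):
        for s in SYNONYMS.get(w, []):
            variants.append(words[:i] + [s] + words[i + 1:])
    return variants

def build_trie(faqs):
    trie = {}
    for faq in faqs:
        node = trie
        for w in normalize(faq):
            node = node.setdefault(w, {})
        node["$end"] = True                        # marks a stored FAQ
    return trie

def trie_match(question, trie):
    # The match succeeds if ANY synonym variant is stored in the trie.
    for variant in expand_synonyms(normalize(question)):
        node = trie
        for w in variant:
            if w not in node:
                break
            node = node[w]
        else:
            if node.get("$end"):
                return True
    return False

trie = build_trie(["how to change password?"])
print(trie_match("how to modify password eh?", trie))
```

Note that the FAQ side is normalized with the same `normalize` function when the trie is built, mirroring the statement that the stored questions undergo the same particle removal and punctuation unification.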
In the electronic device provided in the embodiment of the present application, a trained question-pair-based semantic similarity calculation model is needed to match the question to be matched against the candidate questions. To train the question-pair-based semantic similarity calculation model, one possible implementation has the electronic device perform the following steps:
step S201, a reference problem pair and a reference semantic similarity corresponding to the reference problem pair are obtained.
Wherein the reference problem pair comprises a first reference problem and a second reference problem.
The reference question pair and the reference semantic similarity refer to training examples for training a semantic similarity calculation model based on the question pair.
Based on the foregoing description of the deep learning sentence on the model technology, it can be known that the reference problem pair and the reference semantic similarity form a triple in the training example, the first reference problem included in the reference problem pair can be used as a source sentence, the second reference problem can be used as a target sentence, and then the reference semantic similarity is the relationship between the source sentence and the target sentence.
It should be particularly noted that the embodiment of the present application uses the data in the frequently-asked-question list as reference question pairs and reference semantic similarities, which can result in unbalanced training examples. Specifically, if two questions have the same answer, the pair they form can serve as a training positive example, and the reference semantic similarity is set high, for example to 1. If two questions have different answers, the pair can serve as a training negative example, and the reference semantic similarity is set low, for example to 0. It can be appreciated that far fewer positive examples than negative examples can be formed from the frequently-asked-question list, so the positive examples would carry too little weight in model training.
In order to balance the influence of the training positive and negative examples on model training, the embodiment of the present application provides a possible implementation comprising the following steps:
Step S21: perform data cleaning on the frequently-asked-question list.
Specifically, in the frequently-asked-question list, entries whose questions are similar and whose corresponding answers are similar can be merged, while for entries with identical questions but different corresponding answers, the question-answer bindings can be separated. In addition, English questions in the frequently-asked-question list can be deleted.
Step S22: determine the plurality of questions corresponding to each answer.
Step S23: generate training positive examples from the plurality of questions corresponding to each answer.
Based on the foregoing description, the number of training positive examples that can be formed from the frequently-asked-question list is far smaller than the number of training negative examples, so positive examples must be formed preferentially.
A training positive example is characterized by two questions having the same answer, so positive examples can be generated from the plurality of questions corresponding to each answer.
In addition, considering that different answers correspond to different numbers of questions, it may be stipulated in advance that each answer contributes a similar number of training positive examples.
Step S24: generate training negative examples from questions corresponding to different answers.
It should be appreciated that, in order to reduce the impact of negative examples on model training, a more refined strategy is adopted for generating them.
First, different answers may correspond to questions with similar phrasings; so that the trained model can distinguish such similarly phrased questions, these should be used as negative examples as far as possible.
Second, the number of training negative examples must be kept close to the number of training positive examples.
Third, to avoid the same question appearing in different negative examples, questions already selected must be marked during the selection of negative examples.
With the above method of generating training examples, reference question pairs and reference semantic similarities that reflect actual practice reasonably well can be obtained from the frequently-asked-question list.
In actual use, part of the training examples generated above may serve as a development data set and part as a training data set; for example, 10% of the examples may be used for development and 90% for training, which the embodiment of the present application does not limit.
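The example-generation scheme of steps S21 to S24 can be sketched as follows. The FAQ data, the labels 1/0 and the cap-negatives-at-the-positive-count policy are illustrative assumptions matching the description; a real implementation would also prioritize similarly phrased questions and mark already-selected ones.

```python
# Sketch of training-example generation: questions sharing an answer
# form positive pairs (label 1); questions under different answers form
# negative pairs (label 0), capped near the positive count to keep the
# two classes balanced.
from itertools import combinations

def build_examples(faq):
    # faq: answer -> list of questions with that answer.
    positives = []
    for questions in faq.values():
        positives.extend(
            (q1, q2, 1) for q1, q2 in combinations(questions, 2))
    negatives = []
    for a1, a2 in combinations(list(faq), 2):
        for q1 in faq[a1]:
            for q2 in faq[a2]:
                if len(negatives) < len(positives):  # balance classes
                    negatives.append((q1, q2, 0))
    return positives + negatives

faq = {
    "answer_a": ["reset password", "recover password", "forgot password"],
    "answer_b": ["close account", "delete account"],
}
examples = build_examples(faq)
print(len([e for e in examples if e[2] == 1]),
      len([e for e in examples if e[2] == 0]))
```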
Step S202: perform word segmentation on the first reference question and the second reference question respectively, generating a first reference word set corresponding to the first reference question and a second reference word set corresponding to the second reference question.
Step S203: determine the part of speech corresponding to each reference word in the first and second reference word sets, generating a first reference part-of-speech set corresponding to the first reference word set and a second reference part-of-speech set corresponding to the second reference word set.
Step S204: determine the synonyms corresponding to each reference word in the first and second reference word sets, generating a first reference synonym set corresponding to the first reference word set and a second reference synonym set corresponding to the second reference word set.
Step S205: input the first reference word set, the second reference word set, the first reference part-of-speech set, the second reference part-of-speech set, the first reference synonym set and the second reference synonym set into the encoding layer of the question-pair-based semantic similarity calculation model.
Based on the foregoing description of the question-pair-based semantic similarity calculation model, the word information supplied by the input layer includes a word set, a part-of-speech set and a synonym set. Correspondingly, during model training the input layer needs to perform word segmentation, part-of-speech recognition and synonym recognition on the first reference question and the second reference question of each input reference question pair.
It should be noted that, after processing by the input layer, the input first and second reference questions yield the first reference word set, first reference part-of-speech set and first reference synonym set corresponding to the first reference question, and the second reference word set, second reference part-of-speech set and second reference synonym set corresponding to the second reference question.
Step S206: train the parameters of the question-pair-based semantic similarity calculation model according to the output of the model and the reference semantic similarity.
It should be noted that, as described above, the role of the input layer is to convert the questions in a question pair into the word set, part-of-speech set and synonym set suitable for processing by the model. The parameters of the input layer therefore remain unchanged during model training.
The training of the model parameters is aimed chiefly at the parameters of the encoding layer, the local interaction layer, the aggregation layer and the output layer.
Step S207: when the accuracy of the question-pair-based semantic similarity calculation model exceeds a preset threshold, finish training the model.
An accuracy above the preset threshold indicates that the trained model can meet practical requirements, and the trained question-pair-based semantic similarity calculation model is thereby obtained.
Training of the question-pair-based semantic similarity calculation model is thus accomplished.
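The stopping rule of step S207 can be illustrated with a deliberately tiny sketch: train until accuracy on held-out data clears a preset threshold. The one-parameter "model" and its update rule below are toy assumptions standing in for the full question-pair model and its gradient-based training; only the accuracy-threshold stopping condition mirrors the text.

```python
# Minimal sketch of step S207's stopping rule: keep training until
# accuracy exceeds a preset threshold, then stop and return the model.

def train_until_accurate(data, threshold=0.9, max_epochs=100):
    # data: list of (feature, label); toy model predicts 1 if x > b.
    b, step = 0.0, 0.1
    for _ in range(max_epochs):
        correct = sum((x > b) == bool(y) for x, y in data)
        accuracy = correct / len(data)
        if accuracy > threshold:        # preset accuracy threshold met
            return b, accuracy
        # Nudge the decision boundary toward the first misclassified point.
        errors = [x for x, y in data if (x > b) != bool(y)]
        b += step if errors and errors[0] > b else -step
    return b, accuracy

data = [(0.2, 0), (0.3, 0), (0.7, 1), (0.9, 1)]
boundary, acc = train_until_accurate(data)
print(acc)
```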
In order to implement the foregoing embodiments, an embodiment of the present application further provides an electronic device, and fig. 17 is a schematic diagram of the electronic device provided in the embodiment of the present application. As shown in fig. 17, the device includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor performs the following steps:
step S301, a question to be replied is acquired.
The questions to be replied are questions which are provided by the user through the man-machine interaction interface in the question answering system, and the user needs to reply by the machine.
Step S302, using the matching method of the questions as described above, determines candidate questions matched with the question to be replied.
In the question-answering system based on the frequently asked question list, the frequently asked questions with the same semantic meaning as the questions proposed by the user can be determined as candidate questions from the frequently asked questions stored in advance by adopting the question matching method.
Step S303, reply is performed using the candidate answer corresponding to the candidate question.
It can be understood that, since the candidate question has the same semantic meaning as the question provided by the user, the corresponding answer is also the same, and the candidate answer corresponding to the candidate question can be used for replying.
Therefore, the answer of the frequently asked question stored in advance is used as the answer of the question provided by the user to reply in a question matching mode.
In order to implement the foregoing embodiments, the present application also proposes a computer-readable storage medium storing a computer program which, when run on a computer, causes the computer to execute the question matching method of the foregoing embodiments.
In order to implement the foregoing embodiments, the present application also proposes a computer-readable storage medium storing a computer program which, when run on a computer, causes the computer to execute the question reply method of the foregoing embodiments.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between objects and indicates three possible relationships; for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone, where A and B may each be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, "at least one of a, b and c" may represent: a; b; c; a and b; a and c; b and c; or a, b and c, where a, b and c may each be single or multiple.
Those of ordinary skill in the art will appreciate that the various units and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or a combination of the two. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, any function, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present application, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered by the protection scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.
Claims (27)
1. A method for matching questions, comprising:
acquiring a question to be matched;
matching the question to be matched by using a dictionary tree;
if the matching fails, retrieving a plurality of candidate questions similar to the question to be matched from a preset question bank; and
matching the question to be matched with the candidate questions by using a trained question-pair-based semantic similarity calculation model;
wherein the question-pair-based semantic similarity calculation model comprises an input layer, an encoding layer, a local interaction layer, an aggregation layer and an output layer, the input layer being used for inputting word information of a question pair, the encoding layer being used for performing semantic analysis on the questions in the question pair, determining the importance of each word in a question and learning the structural features of the question, the local interaction layer being used for performing semantic relevance analysis on the two questions in the question pair, the aggregation layer being used for performing feature extraction and aggregation on the output of the local interaction layer, and the output layer being used for calculating the semantic similarity of the question pair.
2. The method of claim 1, further comprising, prior to the acquiring of the question to be matched:
updating the dictionary tree, the preset question bank and the corresponding index library.
3. The method of claim 1, wherein the matching of the question to be matched by using a dictionary tree comprises:
removing modal particles from the question to be matched;
unifying punctuation marks in the question to be matched;
performing synonym replacement on the question to be matched to generate a plurality of similar questions of the question to be matched; and
matching each similar question respectively by using the dictionary tree.
4. The method of claim 1, wherein the matching of the question to be matched with the candidate questions by using the trained question-pair-based semantic similarity calculation model comprises:
combining each candidate question with the question to be matched to form a question pair;
inputting the question pair into the question-pair-based semantic similarity calculation model; and
matching the question to be matched with a candidate question according to the semantic similarity corresponding to each question pair.
5. The method according to any one of claims 1-4, wherein the question-pair-based semantic similarity calculation model is trained by:
acquiring a reference question pair and a reference semantic similarity corresponding to the reference question pair, wherein the reference question pair comprises a first reference question and a second reference question;
performing word segmentation on the first reference question and the second reference question respectively to generate a first reference word set corresponding to the first reference question and a second reference word set corresponding to the second reference question;
determining a part of speech corresponding to each reference word in the first reference word set and the second reference word set to generate a first reference part-of-speech set corresponding to the first reference word set and a second reference part-of-speech set corresponding to the second reference word set;
determining a synonym corresponding to each reference word in the first reference word set and the second reference word set to generate a first reference synonym set corresponding to the first reference word set and a second reference synonym set corresponding to the second reference word set;
inputting the first reference word set, the second reference word set, the first reference part-of-speech set, the second reference part-of-speech set, the first reference synonym set and the second reference synonym set into the encoding layer of the question-pair-based semantic similarity calculation model;
training the parameters of the question-pair-based semantic similarity calculation model according to the output of the output layer of the question-pair-based semantic similarity calculation model and the reference semantic similarity;
and completing the training of the question-pair-based semantic similarity calculation model when the accuracy of the question-pair-based semantic similarity calculation model exceeds a preset threshold.
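The training procedure of claim 5 can be sketched as below: per-question feature extraction (words, parts of speech, synonyms) feeding a model that is updated until held-out accuracy exceeds a preset threshold. Everything here is a stand-in — the POS lexicon, synonym table, model interface (`step`), and `evaluate` callback are all illustrative assumptions, not the patent's actual components.

```python
# Sketch of claim 5's training loop with stand-in feature extraction.

POS_TABLE = {"reset": "verb", "password": "noun"}   # hypothetical POS lexicon
SYN_TABLE = {"reset": "change"}                     # hypothetical synonym table

def extract_features(question):
    words = question.lower().split()                # word segmentation
    pos = [POS_TABLE.get(w, "unk") for w in words]  # part of speech per word
    syns = [SYN_TABLE.get(w, w) for w in words]     # synonym per word
    return words, pos, syns

def train_until_threshold(model, pairs, labels, evaluate,
                          threshold=0.9, max_epochs=100):
    """Update the model on reference pairs until accuracy > threshold."""
    for epoch in range(max_epochs):
        for (q1, q2), y in zip(pairs, labels):
            feats = extract_features(q1) + extract_features(q2)
            model.step(feats, y)                    # one parameter update
        if evaluate(model) > threshold:
            return epoch + 1                        # training is complete
    return max_epochs
```

The stopping rule mirrors the claim: training ends as soon as the model's measured accuracy exceeds the preset threshold, rather than after a fixed epoch count.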
6. The method of any of claims 1-4, wherein the encoding layer comprises a bidirectional recurrent neural network layer, a first normalization layer, and a stacked bidirectional self-attention layer.
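The composition named in claim 6 — recurrent encoding, normalization, then self-attention — can be roughly illustrated as follows. This is a generic sketch under stated assumptions: the random matrix `H` stands in for the bidirectional RNN's hidden states, and all weights are random rather than trained.

```python
import numpy as np

# Rough sketch of the encoding-layer stack in claim 6: (BiRNN output ->)
# layer normalization -> self-attention over the token sequence.

def layer_norm(x, eps=1e-6):
    mu = x.mean(axis=-1, keepdims=True)
    sd = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sd + eps)

def self_attention(H):
    """H: (seq_len, d). Scaled dot-product attention with H as Q, K and V."""
    d = H.shape[-1]
    scores = H @ H.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ H

rng = np.random.default_rng(1)
H = rng.random((5, 8))          # 5 tokens, hidden size 8 (stand-in BiRNN output)
encoded = self_attention(layer_norm(H))
```

Each output position is a weighted mix of all positions, which is how the stacked self-attention layer lets the encoder weigh the importance of every word in the question.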
7. The method of any of claims 1-4, wherein the local interaction layer comprises a bi-directional multi-angle similarity analysis layer and a second normalization layer.
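Claim 7's "bi-directional multi-angle similarity analysis" is not spelled out in the claims, but multi-perspective matching in the style of BiMPM is one plausible reading: each "angle" re-weights the hidden dimensions with a learned vector before taking a cosine similarity. The sketch below shows that idea with random weights standing in for the learned parameters — an assumption, not the patent's exact formula.

```python
import numpy as np

# Illustrative multi-angle (multi-perspective) cosine similarity: perspective k
# scales both vectors element-wise by W[k], then computes a cosine score.

def multi_perspective_cosine(v1, v2, W):
    """v1, v2: (d,) hidden vectors; W: (k, d) perspective weights -> (k,) scores."""
    a, b = W * v1, W * v2                            # per-perspective re-weighting
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8
    return num / den

rng = np.random.default_rng(0)
W = rng.random((4, 8)) + 0.1     # 4 perspectives over hidden size 8 (random stand-in)
v = rng.random(8)
scores = multi_perspective_cosine(v, v, W)           # identical inputs -> ~1.0
```

Running the same comparison in both directions (question 1 against question 2 and vice versa) gives the "bi-directional" half of the layer; the per-perspective scores would then feed the aggregation layer.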
8. A method for replying to a question, comprising:
acquiring a question to be answered;
determining a candidate question matching the question to be answered using the question matching method according to any one of claims 1-7;
and replying with the candidate answer corresponding to the candidate question.
9. A question matching device, comprising:
a first acquisition module, configured to acquire a question to be matched;
a first matching module, configured to match the question to be matched using a dictionary tree;
a retrieval module, configured to retrieve, when the first matching module fails to match, a plurality of candidate questions similar to the question to be matched from a preset question bank;
a second matching module, configured to match the question to be matched with the candidate questions using a trained question-pair-based semantic similarity calculation model;
wherein the question-pair-based semantic similarity calculation model comprises an input layer, an encoding layer, a local interaction layer, an aggregation layer and an output layer; the input layer is used for inputting word information of the question pair, the encoding layer is used for performing semantic analysis on the questions in the question pair, determining the importance of each word in the questions and learning the structural features of the questions, the local interaction layer is used for performing semantic relevance analysis on the two questions in the question pair, the aggregation layer is used for performing feature extraction and aggregation on the output of the local interaction layer, and the output layer is used for calculating the semantic similarity of the question pair.
10. The apparatus of claim 9, further comprising:
an updating module, configured to update the dictionary tree, the preset question bank and the corresponding index library.
11. The apparatus of claim 9, wherein the first matching module comprises:
a removal submodule, configured to remove modal particles from the question to be matched;
a unification submodule, configured to unify punctuation marks in the question to be matched;
a replacement submodule, configured to perform synonym replacement on the question to be matched to generate a plurality of similar questions of the question to be matched;
and a first matching submodule, configured to match each of the similar questions separately using the dictionary tree.
12. The apparatus of claim 9, wherein the second matching module comprises:
a pairing submodule, configured to form a question pair from each candidate question and the question to be matched;
an input submodule, configured to input each question pair into the question-pair-based semantic similarity calculation model;
and a second matching submodule, configured to match the question to be matched with the candidate questions according to the semantic similarity corresponding to each question pair.
13. The apparatus according to any one of claims 9-12, further comprising:
a second acquisition module, configured to acquire a reference question pair and a reference semantic similarity corresponding to the reference question pair, wherein the reference question pair comprises a first reference question and a second reference question;
a processing module, configured to perform word segmentation on the first reference question and the second reference question respectively, so as to generate a first reference word set corresponding to the first reference question and a second reference word set corresponding to the second reference question;
a first determining module, configured to determine a part of speech corresponding to each reference word in the first reference word set and the second reference word set, so as to generate a first reference part-of-speech set corresponding to the first reference word set and a second reference part-of-speech set corresponding to the second reference word set;
a second determining module, configured to determine a synonym corresponding to each reference word in the first reference word set and the second reference word set, so as to generate a first reference synonym set corresponding to the first reference word set and a second reference synonym set corresponding to the second reference word set;
an input module, configured to input the first reference word set, the second reference word set, the first reference part-of-speech set, the second reference part-of-speech set, the first reference synonym set and the second reference synonym set into the encoding layer of the question-pair-based semantic similarity calculation model;
a training module, configured to train the parameters of the question-pair-based semantic similarity calculation model according to the output of the output layer of the question-pair-based semantic similarity calculation model and the reference semantic similarity;
and a completion module, configured to complete the training of the question-pair-based semantic similarity calculation model when the accuracy of the question-pair-based semantic similarity calculation model exceeds a preset threshold.
14. The apparatus of any of claims 9-12, wherein the encoding layer comprises a bidirectional recurrent neural network layer, a first normalization layer, and a stacked bidirectional self-attention layer.
15. The apparatus of any of claims 9-12, wherein the local interaction layer comprises a bi-directional multi-angle similarity analysis layer and a second normalization layer.
16. An apparatus for replying to a question, the apparatus comprising:
a third acquisition module, configured to acquire a question to be answered;
a third determining module, configured to determine a candidate question matching the question to be answered using the question matching method according to any one of claims 1-7;
and a reply module, configured to reply with the candidate answer corresponding to the candidate question.
17. A question-answering system, comprising:
a question-answering interface, configured to receive a user's input and display the generated reply;
a distribution agent, configured to distribute the user's input to a corresponding reply device according to the type of the input;
and the question reply apparatus of claim 16, configured to receive the question to be answered sent by the distribution agent and determine the corresponding answer from a preset question bank.
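The routing role of claim 17's distribution agent can be sketched as a type-keyed dispatcher. The handler names, input types, and the tiny answer bank below are illustrative stand-ins, not components named in the patent.

```python
# Sketch of claim 17: a distribution agent routes user input by type to a
# handler; "question"-type input goes to a claim-16-style reply device.

def make_distribution_agent(handlers, default):
    def dispatch(input_type, content):
        handler = handlers.get(input_type, default)
        return handler(content)
    return dispatch

# stand-in handlers
def reply_device(q):
    # claim-16-style device: look up the answer in a preset question bank
    bank = {"how to reset password": "Use the account settings page."}
    return bank.get(q.lower(), "Sorry, no matching question found.")

def chitchat(q):
    return "Hello!"

agent = make_distribution_agent(
    {"question": reply_device, "chitchat": chitchat}, default=chitchat)
```

Unrecognized input types fall back to the default handler, so the interface always has something to display.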
18. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, performs the following steps:
acquiring a question to be matched;
matching the question to be matched using a dictionary tree;
if the matching fails, retrieving a plurality of candidate questions similar to the question to be matched from a preset question bank;
matching the question to be matched with the candidate questions using a trained question-pair-based semantic similarity calculation model;
wherein the question-pair-based semantic similarity calculation model comprises an input layer, an encoding layer, a local interaction layer, an aggregation layer and an output layer; the input layer is used for inputting word information of the question pair, the encoding layer is used for performing semantic analysis on the questions in the question pair, determining the importance of each word in the questions and learning the structural features of the questions, the local interaction layer is used for performing semantic relevance analysis on the two questions in the question pair, the aggregation layer is used for performing feature extraction and aggregation on the output of the local interaction layer, and the output layer is used for calculating the semantic similarity of the question pair.
19. The electronic device of claim 18, wherein before acquiring the question to be matched, the following step is further performed:
updating the dictionary tree, the preset question bank and the corresponding index library.
20. The electronic device of claim 18, wherein matching the question to be matched using the dictionary tree specifically comprises the following steps:
removing modal particles from the question to be matched;
unifying punctuation marks in the question to be matched;
performing synonym replacement on the question to be matched to generate a plurality of similar questions of the question to be matched;
and matching each of the similar questions separately using the dictionary tree.
21. The electronic device of claim 18, wherein matching the question to be matched with the candidate questions using the trained question-pair-based semantic similarity calculation model specifically comprises the following steps:
forming a question pair from each candidate question and the question to be matched;
inputting each question pair into the question-pair-based semantic similarity calculation model;
and matching the question to be matched with the candidate questions according to the semantic similarity corresponding to each question pair.
22. The electronic device of any one of claims 18-21, wherein in training the question-pair-based semantic similarity calculation model, the electronic device performs the following steps:
acquiring a reference question pair and a reference semantic similarity corresponding to the reference question pair, wherein the reference question pair comprises a first reference question and a second reference question;
performing word segmentation on the first reference question and the second reference question respectively to generate a first reference word set corresponding to the first reference question and a second reference word set corresponding to the second reference question;
determining a part of speech corresponding to each reference word in the first reference word set and the second reference word set to generate a first reference part-of-speech set corresponding to the first reference word set and a second reference part-of-speech set corresponding to the second reference word set;
determining a synonym corresponding to each reference word in the first reference word set and the second reference word set to generate a first reference synonym set corresponding to the first reference word set and a second reference synonym set corresponding to the second reference word set;
inputting the first reference word set, the second reference word set, the first reference part-of-speech set, the second reference part-of-speech set, the first reference synonym set and the second reference synonym set into the encoding layer of the question-pair-based semantic similarity calculation model;
training the parameters of the question-pair-based semantic similarity calculation model according to the output of the output layer of the question-pair-based semantic similarity calculation model and the reference semantic similarity;
and completing the training of the question-pair-based semantic similarity calculation model when the accuracy of the question-pair-based semantic similarity calculation model exceeds a preset threshold.
23. The electronic device of any of claims 18-21, wherein the encoding layer includes a bidirectional recurrent neural network layer, a first normalization layer, and a stacked bidirectional self-attention layer.
24. The electronic device of any of claims 18-21, wherein the local interaction layer includes a bi-directional multi-angle similarity analysis layer and a second normalization layer.
25. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, performs the following steps:
acquiring a question to be answered;
determining a candidate question matching the question to be answered using the question matching method according to any one of claims 1-7;
and replying with the candidate answer corresponding to the candidate question.
26. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to perform the question matching method according to any one of claims 1-7.
27. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to perform the question reply method according to claim 8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911115389.3A CN112800170A (en) | 2019-11-14 | 2019-11-14 | Question matching method and device and question reply method and device |
PCT/CN2020/128016 WO2021093755A1 (en) | 2019-11-14 | 2020-11-11 | Matching method and apparatus for questions, and reply method and apparatus for questions |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911115389.3A CN112800170A (en) | 2019-11-14 | 2019-11-14 | Question matching method and device and question reply method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112800170A true CN112800170A (en) | 2021-05-14 |
Family
ID=75803851
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911115389.3A Pending CN112800170A (en) | 2019-11-14 | 2019-11-14 | Question matching method and device and question reply method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112800170A (en) |
WO (1) | WO2021093755A1 (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111026853B (en) * | 2019-12-02 | 2023-10-27 | 支付宝(杭州)信息技术有限公司 | Target problem determining method and device, server and customer service robot |
CN113221531B (en) * | 2021-06-04 | 2024-08-06 | 西安邮电大学 | Semantic matching method for multi-model dynamic collaboration |
CN113822034B (en) * | 2021-06-07 | 2024-04-19 | 腾讯科技(深圳)有限公司 | Method, device, computer equipment and storage medium for replying text |
CN113591474B (en) * | 2021-07-21 | 2024-04-05 | 西北工业大学 | Repeated data detection method of Loc2vec model based on weighted fusion |
CN113536807B (en) * | 2021-08-03 | 2023-05-05 | 中国航空综合技术研究所 | Incomplete maximum matching word segmentation method based on semantics |
CN113792153B (en) * | 2021-08-25 | 2023-12-12 | 北京度商软件技术有限公司 | Question and answer recommendation method and device |
CN113836918A (en) * | 2021-09-29 | 2021-12-24 | 天翼物联科技有限公司 | Document searching method and device, computer equipment and computer readable storage medium |
CN114358016B (en) * | 2021-12-28 | 2024-10-29 | 中国科学技术大学 | Text matching method, device, equipment and storage medium |
CN114358023B (en) * | 2022-01-11 | 2023-08-22 | 平安科技(深圳)有限公司 | Intelligent question-answer recall method, intelligent question-answer recall device, computer equipment and storage medium |
CN114697280A (en) * | 2022-03-01 | 2022-07-01 | 西安博纳吉生物科技有限公司 | Instant messaging method for preset content |
CN116795953B (en) * | 2022-03-08 | 2024-06-25 | 腾讯科技(深圳)有限公司 | Question-answer matching method and device, computer readable storage medium and computer equipment |
CN114897183B (en) * | 2022-05-16 | 2023-06-13 | 北京百度网讯科技有限公司 | Question data processing method, training method and device of deep learning model |
CN115394293A (en) * | 2022-08-08 | 2022-11-25 | 湖北星纪时代科技有限公司 | Dialog system and method for implementing a dialog |
CN117668171A (en) * | 2023-10-16 | 2024-03-08 | 百度在线网络技术(北京)有限公司 | Text generation method, training device, electronic equipment and storage medium |
CN117875908B (en) * | 2024-03-08 | 2024-07-23 | 蒲惠智造科技股份有限公司 | Work order processing method and system based on enterprise management software SAAS |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107092602A (en) * | 2016-02-18 | 2017-08-25 | 朗新科技股份有限公司 | A kind of auto-answer method and system |
CN108345585A (en) * | 2018-01-11 | 2018-07-31 | 浙江大学 | A kind of automatic question-answering method based on deep learning |
CN109582949A (en) * | 2018-09-14 | 2019-04-05 | 阿里巴巴集团控股有限公司 | Event element abstracting method, calculates equipment and storage medium at device |
US20190243900A1 (en) * | 2017-03-03 | 2019-08-08 | Tencent Technology (Shenzhen) Company Limited | Automatic questioning and answering processing method and automatic questioning and answering system |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7618259B2 (en) * | 2004-05-13 | 2009-11-17 | Hewlett-Packard Development Company, L.P. | Worksheet wizard—system and method for creating educational worksheets |
CN101076184B (en) * | 2006-07-31 | 2011-09-21 | 腾讯科技(深圳)有限公司 | Method and system for realizing automatic reply |
TW200931212A (en) * | 2008-01-04 | 2009-07-16 | Compal Communications Inc | Alarm and managing method thereof |
CN102368246A (en) * | 2011-09-15 | 2012-03-07 | 张德长 | Automatic-answer robot system |
- 2019-11-14: CN CN201911115389.3A patent/CN112800170A/en, status: active, Pending
- 2020-11-11: WO PCT/CN2020/128016 patent/WO2021093755A1/en, status: active, Application Filing
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113342842A (en) * | 2021-06-10 | 2021-09-03 | 南方电网数字电网研究院有限公司 | Semantic query method and device based on metering knowledge and computer equipment |
CN113947218A (en) * | 2021-09-16 | 2022-01-18 | 刘兆海 | Product after-sale intelligent management method and system based on big data |
CN115878924A (en) * | 2021-09-27 | 2023-03-31 | 小沃科技有限公司 | Data processing method, device, medium and electronic equipment based on double dictionary trees |
CN115878924B (en) * | 2021-09-27 | 2024-03-12 | 小沃科技有限公司 | Data processing method, device, medium and electronic equipment based on double dictionary trees |
CN114090747A (en) * | 2021-10-14 | 2022-02-25 | 特斯联科技集团有限公司 | Automatic question answering method, device, equipment and medium based on multiple semantic matching |
CN114372122A (en) * | 2021-12-08 | 2022-04-19 | 阿里云计算有限公司 | Information acquisition method, computing device and storage medium |
CN115129424A (en) * | 2022-07-07 | 2022-09-30 | 江苏红网技术股份有限公司 | Data asset management platform and method thereof |
CN116910225A (en) * | 2023-09-13 | 2023-10-20 | 北京三五通联科技发展有限公司 | Active response method and system based on cloud platform |
CN116910225B (en) * | 2023-09-13 | 2023-11-21 | 北京三五通联科技发展有限公司 | Active response method and system based on cloud platform |
Also Published As
Publication number | Publication date |
---|---|
WO2021093755A1 (en) | 2021-05-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112800170A (en) | Question matching method and device and question reply method and device | |
CN111753060B (en) | Information retrieval method, apparatus, device and computer readable storage medium | |
US20220253477A1 (en) | Knowledge-derived search suggestion | |
US9720944B2 (en) | Method for facet searching and search suggestions | |
CN111767716B (en) | Method and device for determining enterprise multi-level industry information and computer equipment | |
CN112214593A (en) | Question and answer processing method and device, electronic equipment and storage medium | |
CN116775847A (en) | Question answering method and system based on knowledge graph and large language model | |
CN111159485B (en) | Tail entity linking method, device, server and storage medium | |
CN104471568A (en) | Learning-based processing of natural language questions | |
CN113569011B (en) | Training method, device and equipment of text matching model and storage medium | |
CN111078837A (en) | Intelligent question and answer information processing method, electronic equipment and computer readable storage medium | |
CN112115232A (en) | Data error correction method and device and server | |
CN113282711B (en) | Internet of vehicles text matching method and device, electronic equipment and storage medium | |
CN111460114A (en) | Retrieval method, device, equipment and computer readable storage medium | |
CN118296132B (en) | Customer service searching method and system based on intelligent large model | |
CN114491079A (en) | Knowledge graph construction and query method, device, equipment and medium | |
CN113886545A (en) | Knowledge question answering method, knowledge question answering device, computer readable medium and electronic equipment | |
CN113486143A (en) | User portrait generation method based on multi-level text representation and model fusion | |
CN117828024A (en) | Plug-in retrieval method, device, storage medium and equipment | |
CN113157892B (en) | User intention processing method, device, computer equipment and storage medium | |
CN110941713A (en) | Self-optimization financial information plate classification method based on topic model | |
CN115203206A (en) | Data content searching method and device, computer equipment and readable storage medium | |
CN113779981A (en) | Recommendation method and device based on pointer network and knowledge graph | |
CN114093447A (en) | Data asset recommendation method and device, computer equipment and storage medium | |
CN112507097A (en) | Method for improving generalization capability of question-answering system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||