CN110781284A - Knowledge graph-based question and answer method, device and storage medium
Knowledge graph-based question and answer method, device and storage medium
- Publication number
- CN110781284A (application number CN201910885936.XA)
- Authority
- CN
- China
- Prior art keywords
- question
- entity
- label
- accuracy
- artificial
- Prior art date: 2019-09-18
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
Abstract
The invention provides a knowledge-graph-based question-answering method, device and storage medium. The method comprises: obtaining a question and answer sentence input by a user, segmenting the question and answer sentence, and obtaining the entity elements in the question and answer sentence and the labels corresponding to the entity elements through a trained entity element recognition model and a trained label recognition model, respectively; inputting the entity elements, the labels and the question and answer sentence into a Bayesian classifier, calculating in the Bayesian classifier the matching degree between each preset template and the question and answer sentence, and determining the preset template with the highest matching degree as the query template; and inputting the entity elements and the labels into the query template to obtain the corresponding query statement, and inputting the query statement into a knowledge graph for querying to obtain the corresponding question and answer result. In this way the invention reduces the dependence on the training corpus and deeply mines the user's question and answer sentences, thereby improving the accuracy of the question and answer results output by the question-answering system.
Description
Technical Field
The invention relates to the technical field of information processing, and in particular to a knowledge-graph-based question-answering method, device and storage medium.
Background
With the development of internet technology and the emergence of large-scale network data resources, people hope to obtain valuable information from massive internet data accurately and quickly, which has driven the wide adoption of retrieval-based question-answering systems. A question-answering system is an advanced form of information retrieval system: it answers questions posed by users in natural language with accurate and concise natural-language answers. In the current fields of artificial intelligence and natural language processing, the development and improvement of question-answering systems remains a research direction that attracts wide attention and has broad prospects.
At present, a traditional question-answering system trains a model on a given question-answering corpus; the user's natural-language input is processed and fed into the trained model, and a result is obtained by looking up similar corpus entries in the model. However, the accuracy of such a question-answering system depends on the coverage of the training corpus, and when the questions input by the user are complicated, the question and answer results output by the traditional question-answering system are inaccurate.
Disclosure of Invention
The invention mainly aims to provide a question-answering method, a question-answering device and a storage medium based on a knowledge graph, and aims to solve the technical problem that a question-answering result output by a traditional question-answering system is inaccurate.
In order to achieve the aim, the invention provides a question-answering method based on a knowledge graph, which comprises the following steps:
acquiring a question and answer sentence input by a user, segmenting the question and answer sentence, and respectively acquiring an entity element in the question and answer sentence and a label corresponding to the entity element through a trained entity element identification model and a label identification model;
inputting the entity elements, the tags and the question and answer sentences into a Bayesian classifier, calculating the matching degree of each preset template in the Bayesian classifier and the question and answer sentences, and determining the preset template with the highest matching degree as a query template;
and inputting the entity elements and the tags into the query template to obtain corresponding query sentences, and inputting the query sentences into a knowledge graph for querying to obtain corresponding question and answer results.
Optionally, before the step of obtaining the question-answer sentences input by the user and performing word segmentation on the question-answer sentences, the method further includes:
acquiring a training corpus through a web crawler technology, and segmenting the training corpus;
receiving manual labeling entity elements and manual labeling labels for words obtained after word segmentation, and obtaining manual entity elements and manual labels corresponding to the words;
inputting the artificial entity elements and the training corpora into a preset entity element recognition model so as to train the entity element recognition model;
and inputting the artificial label and the training corpus into a preset label recognition model so as to train the label recognition model.
Optionally, the step of training the entity element recognition model includes:
extracting the training corpus through the entity element recognition model to obtain corresponding extracted entity elements;
determining entity elements which coincide with the extracted entity elements in the artificial entity elements as an entity element set;
calculating the accuracy of the entity element recognition model according to the entity element set, the artificial entity elements and the extracted entity elements;
and taking the entity element recognition model with the accuracy exceeding the preset first accuracy as the trained entity element recognition model.
Optionally, the step of calculating the accuracy of the entity element recognition model according to the entity element set, the artificial entity element and the extracted entity element includes:
dividing the entity element set by the extracted entity elements to obtain the accuracy of the entity element identification model;
dividing the set of entity elements by the artificial entity elements to obtain a recall rate of the entity element identification model;
calculating a product of the accuracy and the recall rate and a sum of the accuracy and the recall rate, dividing the product by the sum, and multiplying an obtained calculation result by a preset numerical value to be used as a first F value of the entity element identification model;
and calculating the accuracy of the entity element identification model according to the accuracy, the recall rate and the first F value of the entity element identification model.
Optionally, the step of training the label recognition model includes:
extracting the training corpus through the label recognition model to obtain a corresponding extracted label;
determining a label which is coincident with the extracted label in the artificial labels as a label set;
calculating the accuracy of the label identification model according to the label set, the artificial label and the extracted label;
and taking the label recognition model with the accuracy exceeding the preset second accuracy as the trained label recognition model.
Optionally, the step of calculating the accuracy of the tag identification model according to the tag set, the artificial tag and the extracted tag comprises:
dividing the label set by the extracted labels to obtain the accuracy of the label identification model;
dividing the label set by the artificial label to obtain the recall rate of the label identification model;
calculating a product of the accuracy and the recall rate and a sum of the accuracy and the recall rate, dividing the product by the sum, and multiplying an obtained calculation result by a preset numerical value to be used as a second F value of the tag identification model;
and calculating the accuracy of the label identification model according to the accuracy, the recall rate and the second F value of the label identification model.
Optionally, after the step of obtaining the artificial entity elements and the artificial labels corresponding to the words, the method includes:
taking an artificial entity element and an artificial label as input of a preset TransE algorithm, so that the artificial entity element and the artificial label are embedded into a low-dimensional vector space, and a corresponding vector template is generated;
and storing the vector template into a database to construct a corresponding knowledge graph.
Optionally, the step of inputting the query statement into a knowledge graph for querying to obtain a corresponding question-answer result includes:
vectorizing the query statement to generate a corresponding vector set;
and matching the vector set with a vector template in the knowledge graph to obtain a corresponding question and answer result.
Further, to achieve the above object, the present invention also provides an apparatus comprising: a memory, a processor, and a knowledge-graph based question-answering program stored on the memory and executable on the processor, the knowledge-graph based question-answering program when executed by the processor implementing the steps of the knowledge-graph based question-answering method as described above.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium having a knowledge-graph-based question-answering program stored thereon, which when executed by a processor implements the steps of the knowledge-graph-based question-answering method as described above.
The invention discloses a knowledge-graph-based question-answering method, device and storage medium. The method comprises: obtaining a question and answer sentence input by a user, segmenting the question and answer sentence, and obtaining the entity elements in the question and answer sentence and the labels corresponding to the entity elements through a trained entity element recognition model and a trained label recognition model, respectively; inputting the entity elements, the labels and the question and answer sentence into a Bayesian classifier, calculating in the Bayesian classifier the matching degree between each preset template and the question and answer sentence, and determining the preset template with the highest matching degree as the query template; and inputting the entity elements and the labels into the query template to obtain the corresponding query statement, and inputting the query statement into a knowledge graph for querying to obtain the corresponding question and answer result. By analyzing the user's question and answer sentence with the trained entity element recognition model and label recognition model, the entity elements and labels of the question and answer sentence are obtained and the user's question and answer sentence is deeply mined; the most suitable query template is then determined, the corresponding query statement is generated, and the corresponding question and answer result is obtained from the knowledge graph. The whole process reduces the dependence on the training corpus and avoids missed and false detections by the question-answering system, thereby improving the accuracy of the question and answer results output by the question-answering system.
Drawings
FIG. 1 is a schematic diagram of an apparatus in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an embodiment of a knowledge-graph based question-answering method according to the present invention;
FIG. 3 is a schematic flow chart of a method for knowledge-graph based question answering according to another embodiment of the present invention;
FIG. 4 is a flow chart of a question-answering method based on a knowledge graph according to another embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the present invention is a device with a storage function, such as a mobile phone, a computer, a portable computer or other terminal equipment.
As shown in fig. 1, the terminal may include: a processor 1001 such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a Wi-Fi module, and the like, which are not described herein again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 mainly includes an input unit such as a keyboard including a wireless keyboard and a wired keyboard, and is used to connect to the client and perform data communication with the client; and the processor 1001 may be configured to invoke the knowledge-graph based question-answering program stored in the memory 1005 and perform the following operations:
acquiring a question and answer sentence input by a user, segmenting the question and answer sentence, and respectively acquiring an entity element in the question and answer sentence and a label corresponding to the entity element through a trained entity element identification model and a label identification model;
inputting the entity elements, the tags and the question and answer sentences into a Bayesian classifier, calculating the matching degree of each preset template in the Bayesian classifier and the question and answer sentences, and determining the preset template with the highest matching degree as a query template;
and inputting the entity elements and the tags into the query template to obtain corresponding query sentences, and inputting the query sentences into a knowledge graph for querying to obtain corresponding question and answer results.
Further, processor 1001 may invoke a knowledge-graph based question-answering program stored in memory 1005, and also perform the following operations:
acquiring a training corpus through a web crawler technology, and segmenting the training corpus;
receiving manual labeling entity elements and manual labeling labels for words obtained after word segmentation, and obtaining manual entity elements and manual labels corresponding to the words;
inputting the artificial entity elements and the training corpora into a preset entity element recognition model so as to train the entity element recognition model;
and inputting the artificial label and the training corpus into a preset label recognition model so as to train the label recognition model.
Further, processor 1001 may invoke a knowledge-graph based question-answering program stored in memory 1005, and also perform the following operations:
extracting the training corpus through the entity element recognition model to obtain corresponding extracted entity elements;
determining entity elements which coincide with the extracted entity elements in the artificial entity elements as an entity element set;
calculating the accuracy of the entity element recognition model according to the entity element set, the artificial entity elements and the extracted entity elements;
and taking the entity element recognition model with the accuracy exceeding the preset first accuracy as the trained entity element recognition model.
Further, processor 1001 may invoke a knowledge-graph based question-answering program stored in memory 1005, and also perform the following operations:
dividing the entity element set by the extracted entity elements to obtain the accuracy of the entity element identification model;
dividing the set of entity elements by the artificial entity elements to obtain a recall rate of the entity element identification model;
calculating a product of the accuracy and the recall rate and a sum of the accuracy and the recall rate, dividing the product by the sum, and multiplying an obtained calculation result by a preset numerical value to be used as a first F value of the entity element identification model;
and calculating the accuracy of the entity element identification model according to the accuracy, the recall rate and the first F value of the entity element identification model.
Further, processor 1001 may invoke a knowledge-graph based question-answering program stored in memory 1005, and also perform the following operations:
extracting the training corpus through the label recognition model to obtain a corresponding extracted label;
determining a label which is coincident with the extracted label in the artificial labels as a label set;
calculating the accuracy of the label identification model according to the label set, the artificial label and the extracted label;
and taking the label recognition model with the accuracy exceeding the preset second accuracy as the trained label recognition model.
Further, processor 1001 may invoke a knowledge-graph based question-answering program stored in memory 1005, and also perform the following operations:
dividing the label set by the extracted labels to obtain the accuracy of the label identification model;
dividing the label set by the artificial label to obtain the recall rate of the label identification model;
calculating a product of the accuracy and the recall rate and a sum of the accuracy and the recall rate, dividing the product by the sum, and multiplying an obtained calculation result by a preset numerical value to be used as a second F value of the tag identification model;
and calculating the accuracy of the label identification model according to the accuracy, the recall rate and the second F value of the label identification model.
Further, processor 1001 may invoke a knowledge-graph based question-answering program stored in memory 1005, and also perform the following operations:
taking an artificial entity element and an artificial label as input of a preset TransE algorithm, so that the artificial entity element and the artificial label are embedded into a low-dimensional vector space, and a corresponding vector template is generated;
and storing the vector template into a database to construct a corresponding knowledge graph.
Further, processor 1001 may invoke a knowledge-graph based question-answering program stored in memory 1005, and also perform the following operations:
vectorizing the query statement to generate a corresponding vector set;
and matching the vector set with a vector template in the knowledge graph to obtain a corresponding question and answer result.
The specific embodiment of the device is basically the same as the following embodiments of the knowledge-graph-based question-answering method, and the detailed description is omitted here.
Referring to fig. 2, fig. 2 is a schematic flow chart of an embodiment of the knowledge-graph-based question-answering method of the present invention. The question-answering method based on the knowledge graph provided by this embodiment comprises the following steps:
step S10, obtaining question and answer sentences input by a user, segmenting the question and answer sentences, and respectively obtaining entity elements in the question and answer sentences and labels corresponding to the entity elements through a trained entity element recognition model and a trained label recognition model;
the question and answer sentences input by the user are obtained, it should be understood that the question and answer sentences expressed by the user can be obtained in a voice recognition mode, and the question and answer sentences input by the user can also be obtained in other modes, which is not specifically limited in this embodiment. Segmenting the obtained question and answer sentences based on an NLP technology, and extracting the entity elements of the question and answer sentences through an entity element identification model after segmentation; and extracting the labels in the question-answering sentences through a label identification model after word segmentation. In order to elaborately describe the relationship between the question and answer sentence and the entity element and the tag, taking the question and answer sentence as "what relationship the yellow dawn and the growing medicine have", the question and answer sentence "yellow dawn" is the entity element, the corresponding tag is the person, "growing medicine" is the entity element, and the corresponding tag is the company.
Step S20, inputting the entity elements, the labels and the question and answer sentences into a Bayesian classifier, calculating the matching degree of each preset template in the Bayesian classifier and the question and answer sentences, and determining the preset template with the highest matching degree as a query template;
The question and answer sentence input by the user, together with its entity elements and labels, is input into a preset Bayesian classifier. Based on the prior probability of an object, the Bayesian formula gives the probability that the object belongs to each class, and the class with the maximum posterior probability is selected as the class to which the object belongs. Preferably, three templates are preset in the Bayesian classifier, namely a relationship query template, an entity query template and an attribute query template; by calculating the matching degree of each preset template with the entity elements, the labels and the question and answer sentence, and combining the context semantics of the question and answer sentence, the template with the highest matching degree is determined as the query template.
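A toy naive-Bayes template selector along these lines is sketched below; the priors and word likelihoods are assumed to have been estimated from labelled questions beforehand, and the tiny smoothing floor is an arbitrary choice rather than part of the embodiment.

```python
import math

TEMPLATES = ["relationship_query", "entity_query", "attribute_query"]


def select_template(features, priors, likelihoods):
    """Pick the preset template with the highest posterior probability.

    `features` are the segmented words plus the recognized entities and labels,
    `priors[t]` approximates P(template) and `likelihoods[t][f]` approximates
    P(feature | template). Log probabilities are used to avoid underflow.
    """
    best_template, best_score = None, -math.inf
    for t in TEMPLATES:
        score = math.log(priors[t])
        for f in features:
            score += math.log(likelihoods[t].get(f, 1e-6))  # small floor as crude smoothing
        if score > best_score:
            best_template, best_score = t, score
    return best_template
```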
Step S30, inputting the entity elements and the labels into the query template to obtain corresponding query sentences, and inputting the query sentences into a knowledge graph for query to obtain corresponding question and answer results.
After the query template is obtained, the entity elements and labels corresponding to the question and answer sentence are input into the query template to obtain the corresponding query statement. In addition, a knowledge graph is preset in this embodiment: the query statement is input into the knowledge graph, and the corresponding question and answer result is obtained by means of vector feature matching.
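Filling the query template can be as simple as string substitution, as in the sketch below; the template strings themselves are placeholders invented for illustration, since the embodiment does not fix their wording.

```python
# Placeholder query templates; the slot names e0/l0/e1/l1 are an assumed convention.
QUERY_TEMPLATES = {
    "relationship_query": "RELATION BETWEEN {e0} ({l0}) AND {e1} ({l1})",
    "entity_query": "ENTITY WITH LABEL {l0} RELATED TO {e0}",
    "attribute_query": "ATTRIBUTE OF {e0} ({l0})",
}


def build_query(template_name, entities, labels):
    """Fill the chosen query template with the recognized entity elements and labels."""
    slots = {}
    for i, e in enumerate(entities):
        slots[f"e{i}"] = e
        slots[f"l{i}"] = labels[e]
    # Each template expects the corresponding number of entity/label slots.
    return QUERY_TEMPLATES[template_name].format(**slots)
```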
In this embodiment, a question and answer sentence input by the user is obtained, the question and answer sentence is segmented, and the entity elements in the question and answer sentence and the labels corresponding to the entity elements are obtained through the trained entity element recognition model and the trained label recognition model, respectively; the entity elements, the labels and the question and answer sentence are input into the Bayesian classifier, the matching degree between each preset template in the Bayesian classifier and the question and answer sentence is calculated, and the preset template with the highest matching degree is determined as the query template; the entity elements and the labels are then input into the query template to obtain the corresponding query statement, and the query statement is input into the knowledge graph for querying to obtain the corresponding question and answer result. By analyzing the user's question and answer sentence with the trained entity element recognition model and label recognition model, the entity elements and labels of the question and answer sentence are obtained and the user's question and answer sentence is deeply mined; the most suitable query template is determined, the corresponding query statement is generated, and the corresponding question and answer result is obtained from the knowledge graph. The whole process reduces the dependence on the training corpus and avoids missed and false detections by the question-answering system, thereby improving the accuracy of the question and answer results output by the question-answering system.
Further, please refer to fig. 3, fig. 3 is a schematic flow chart of a knowledge-graph-based question-answering method according to another embodiment of the present invention. Step S10 is to obtain a question-answer sentence input by the user, and before performing word segmentation on the question-answer sentence, the method further includes:
step S40, acquiring a corpus through a web crawler technology, and segmenting the corpus;
Before all of the above steps, the entity element recognition model and the label recognition model need to be trained on existing data. First, a web crawler technology is used to obtain a large amount of information from existing databases as the training corpus, and the obtained corpus is segmented into words. A web crawler is a program or script that automatically captures world wide web information according to certain rules and can automatically update the stored content and the way information is acquired and retrieved. For example, for the question and answer sentence "What is the relationship between Huang Xiaoming and Changsheng Pharmaceutical?", the crawler may collect a related corpus from databases such as the national enterprise credit information publicity system, news databases and enterprise credit databases.
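A minimal crawling sketch using the widely available `requests` and `BeautifulSoup` libraries is shown below; the seed URLs are placeholders, and the actual corpus sources are only those named in general terms above.

```python
import jieba
import requests
from bs4 import BeautifulSoup


def crawl_corpus(seed_urls):
    """Fetch pages from the seed URLs and return word-segmented training sentences."""
    corpus = []
    for url in seed_urls:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        text = BeautifulSoup(resp.text, "html.parser").get_text(separator="\n")
        for line in text.splitlines():
            line = line.strip()
            if line:
                corpus.append(list(jieba.cut(line)))  # word segmentation of each sentence
    return corpus
```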
Step S50, receiving the manual labeling entity elements and the manual labeling labels of the words obtained after word segmentation, and obtaining the manual entity elements and the manual labels corresponding to the words;
After the training corpus is segmented, entity elements and labels are manually annotated on the resulting words; in particular, if an entity element consists of several Chinese characters, the label is manually marked at a preset position among those characters. As an embodiment, the type of the label may be determined according to the types of the preset query templates.
For example, the preset query templates include a relationship query template, an entity query template and an attribute query template; the entity elements are "Huang Xiaoming" and "Changsheng Pharmaceutical", and the results after manual labeling are "Huang Xiaoming [person]" and "Changsheng Pharmaceutical [company]".
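One possible in-memory representation of such an annotated sentence is sketched below; the field names and the bracketed label convention are assumptions for illustration only.

```python
# Hypothetical annotation record for one segmented training sentence.
annotated_example = {
    "tokens": ["Huang Xiaoming", "and", "Changsheng Pharmaceutical", "what", "relationship"],
    "entities": ["Huang Xiaoming", "Changsheng Pharmaceutical"],  # manually labeled entity elements
    "labels": {                                                   # manually labeled labels
        "Huang Xiaoming": "person",
        "Changsheng Pharmaceutical": "company",
    },
}
```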
Step S60, inputting the artificial entity element and the training corpus into a preset entity element recognition model to train the entity element recognition model;
For the training of the entity element recognition model, after the entity elements have been manually labeled, the artificial entity elements and the training corpus are input into the preset entity element recognition model, and the preset entity element recognition model is trained.
And step S70, inputting the artificial label and the training corpus into a preset label recognition model to train the label recognition model.
For the training of the label recognition model, after the manual labeling of the labels has been completed, the artificial labels and the training corpus are input into the preset label recognition model, and the preset label recognition model is trained.
In this embodiment, a large amount of training corpus is obtained using the crawler technology, providing the data required for training the models; entity element labeling and label labeling are carried out on the training corpus by manual annotation, and the entity element recognition model and the label recognition model are trained, thereby ensuring the accuracy with which the entity element recognition model extracts entity elements and the label recognition model extracts labels.
Further, the step of training the entity element recognition model comprises:
step S61, extracting the training corpus through the entity element recognition model to obtain a corresponding extracted entity element;
In this embodiment, the training corpus is input into the entity element recognition model, which performs extraction on the corpus; the entity elements extracted by the model are taken as the extracted entity elements. It should be understood that, since the accuracy of the entity element recognition model has not yet been established, the model may extract erroneous entity elements, i.e., entity elements that do not belong to the corpus.
Step S62, determining the entity elements which are overlapped with the extracted entity elements in the artificial entity elements as an entity element set;
Since the training corpus has already been manually annotated with entity elements in the preceding step, the artificial entity elements are compared with the extracted entity elements, and the entity elements that appear in both are taken as the entity element set. It is easy to understand that the entity elements in the entity element set must be correct entity elements, i.e., entity elements that belong to the input training corpus.
Step S63, calculating the accuracy of the entity element recognition model according to the entity element set, the artificial entity elements and the extracted entity elements;
After the entity element set is obtained, the accuracy of the trained entity element recognition model is calculated from the entity element set, the artificial entity elements and the extracted entity elements using a preset formula.
And step S64, taking the entity element recognition model with the accuracy exceeding the preset first accuracy as the trained entity element recognition model.
In this embodiment, if the accuracy of the entity element recognition model does not exceed the preset first accuracy, this means the model does not extract entity elements accurately enough and extracts erroneous entities; in that case the entity element recognition model continues to be trained on the training corpus until its accuracy exceeds the preset first accuracy.
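The iterate-until-threshold logic might be expressed as in the sketch below, where `model.train_one_round` and `evaluate_fn` are hypothetical stand-ins for a single training pass and for the accuracy calculation described in the following steps, and the 0.9 threshold is only an assumed value for the preset first accuracy.

```python
def train_until_accurate(model, corpus, manual_entities, evaluate_fn, first_accuracy=0.9):
    """Keep training the recognition model until its accuracy exceeds the preset threshold."""
    accuracy = 0.0
    while accuracy <= first_accuracy:
        model.train_one_round(corpus, manual_entities)          # one training pass (hypothetical API)
        accuracy = evaluate_fn(model, corpus, manual_entities)  # accuracy as computed below
    return model, accuracy
```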
In the embodiment, the entity element recognition model is trained by utilizing the training corpus and the artificial entity elements, and the entity element recognition model after training is ensured to meet the preset accuracy by calculating the accuracy of the entity element recognition model, so that the entity elements in the question and answer sentences can be accurately extracted by the entity element recognition model.
Further, the step of calculating the accuracy of the entity element recognition model according to the entity element set, the artificial entity elements and the extracted entity elements comprises:
step S631, dividing the entity element set by the extracted entity elements to obtain the accuracy of the entity element identification model;
In this embodiment, the entity elements in the entity element set are necessarily correct, whereas some of the entity elements extracted by the entity element recognition model may not belong to the corpus; therefore, the number of entity elements in the entity element set is divided by the number of entity elements extracted by the recognition model, and the result is taken as the accuracy rate of the entity element recognition model.
Step S632, dividing the entity element set by the artificial entity element to obtain the recall rate of the entity element identification model;
In this embodiment, the number of entity elements in the entity element set is divided by the number of manually labeled entity elements, and the result is taken as the recall rate of the entity element recognition model.
Step S633, calculating a product of the accuracy and the recall rate and a sum of the accuracy and the recall rate, dividing the product by the sum, and multiplying an obtained calculation result by a preset numerical value to be used as a first F value of the entity element recognition model;
After the recall rate and the accuracy rate of the entity element recognition model have been obtained, the product of the accuracy rate and the recall rate is divided by their sum, and the result is multiplied by a preset value, preferably 2, to give the first F value of the entity element recognition model, i.e. first F value = 2 × (accuracy × recall) / (accuracy + recall).
Step S634, calculating the accuracy of the entity element recognition model according to the accuracy, the recall ratio and the first F value of the entity element recognition model.
After the accuracy rate, the recall rate and the first F value have been obtained, the overall accuracy of the entity element recognition model can be computed from these three factors; specifically, different weights are assigned to the accuracy rate, the recall rate and the first F value, and the weighted combination is taken as the model's accuracy. It is easy to understand that the weight assigned to each value can be chosen by the developer.
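In concrete terms, writing |S| for the size of the overlapping entity element set, |E| for the number of extracted entity elements and |M| for the number of manually labeled entity elements, the calculation can be sketched as below; the weights used to blend the three quantities into one accuracy figure are arbitrary placeholders, since the description leaves them to the developer. The same calculation applies, with labels in place of entity elements, to the second F value and accuracy of the label recognition model described further on.

```python
def recognition_accuracy(overlap_count, extracted_count, manual_count, weights=(0.3, 0.3, 0.4)):
    """Accuracy rate, recall rate, F value and weighted overall accuracy of a recognition model.

    overlap_count   -- |S|: elements found by both the model and the manual annotation
    extracted_count -- |E|: elements extracted by the model
    manual_count    -- |M|: manually labeled elements
    """
    accuracy_rate = overlap_count / extracted_count                              # |S| / |E|
    recall_rate = overlap_count / manual_count                                   # |S| / |M|
    f_value = 2 * (accuracy_rate * recall_rate) / (accuracy_rate + recall_rate)  # preset value = 2
    w_a, w_r, w_f = weights                                                      # placeholder weighting
    return w_a * accuracy_rate + w_r * recall_rate + w_f * f_value
```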
In this embodiment, the accuracy rate, the recall rate and the first F value of the entity element recognition model are calculated from the entity element set, the artificial entity elements and the extracted entity elements, and the overall accuracy of the entity element recognition model is obtained from the accuracy rate, the recall rate and the first F value.
Further, the step of training the label recognition model comprises:
step S71, extracting the training corpus through the label recognition model to obtain a corresponding extraction label;
In this embodiment, the training corpus is input into the label recognition model, which performs extraction on the corpus; the labels extracted by the model are taken as the extracted labels corresponding to the corpus. It should be understood that, since the accuracy of the label recognition model has not yet been established, the model may extract erroneous labels, i.e., labels that do not belong to the corpus.
Step S72, determining the label coincident with the extracted label in the artificial label as a label set;
Since the training corpus has already been manually labeled in the preceding steps, the artificial labels are compared with the extracted labels, and the labels that appear in both are taken as the label set. It is easy to understand that the labels in the label set must be correct labels, i.e., labels that belong to the input training corpus.
Step S73, calculating the accuracy of the label identification model according to the label set, the artificial label and the extracted label;
After the label set is obtained, the accuracy of the trained label recognition model is calculated from the label set, the artificial labels and the extracted labels using a preset formula.
And step S74, taking the label recognition model with the accuracy exceeding the preset second accuracy as the trained label recognition model.
In this embodiment, if the accuracy of the label recognition model does not exceed the preset second accuracy, this means the model does not extract labels accurately enough and extracts erroneous labels; in that case the label recognition model continues to be trained on the training corpus until its accuracy exceeds the preset second accuracy.
In the embodiment, the label recognition model is trained by utilizing the training corpus and the artificial labels, and the label recognition model after training is ensured to accord with the preset accuracy degree by calculating the accuracy degree of the label recognition model, so that the label in the question and answer sentence can be accurately extracted by the label recognition model.
Further, the step of calculating the accuracy of the tag identification model according to the tag set, the artificial tag and the extracted tag comprises:
step S731, dividing the label set by the extracted labels to obtain the accuracy of the label identification model;
In this embodiment, the labels in the label set are necessarily correct, whereas some of the labels extracted by the label recognition model may not belong to the corpus; therefore, the number of labels in the label set is divided by the number of labels extracted by the recognition model, and the result is taken as the accuracy rate of the label recognition model.
Step S732, dividing the label set by the artificial label to obtain the recall rate of the label identification model;
In this embodiment, the number of labels in the label set is divided by the number of manually labeled labels, and the result is taken as the recall rate of the label recognition model.
Step S733, calculating a product of the accuracy and the recall rate and a sum of the accuracy and the recall rate, dividing the product by the sum, and multiplying an obtained calculation result by a preset numerical value to serve as a second F value of the tag identification model;
After the recall rate and the accuracy rate of the label recognition model have been obtained, the product of the accuracy rate and the recall rate is divided by their sum, and the result is multiplied by a preset value, preferably 2, to give the second F value of the label recognition model, i.e. second F value = 2 × (accuracy × recall) / (accuracy + recall).
Step S734, calculating an accuracy of the tag identification model according to the accuracy, the recall ratio and the second F value of the tag identification model.
After the accuracy rate, the recall rate and the second F value have been obtained, the overall accuracy of the label recognition model can be computed from these three factors; specifically, different weights are assigned to the accuracy rate, the recall rate and the second F value, and the weighted combination is taken as the model's accuracy. It is easy to understand that the weight assigned to each value can be chosen by the developer.
In this embodiment, the accuracy rate, the recall rate and the second F value of the label recognition model are calculated from the label set, the artificial labels and the extracted labels, and the overall accuracy of the label recognition model is obtained from the accuracy rate, the recall rate and the second F value.
Further, referring to fig. 4, fig. 4 is a schematic flow chart of a knowledge-graph-based question-answering method according to another embodiment of the present invention. After the step S50 obtains the artificial entity element and the artificial label corresponding to the word, the method further includes:
step S80, taking an artificial entity element and an artificial label as the input of a preset TransE algorithm, so that the artificial entity element and the artificial label are embedded into a low-dimensional vector space to generate a corresponding vector template;
In this embodiment, a TransE algorithm is also preset; TransE produces distributed vector representations of entities and relations. The artificial entity elements and the artificial labels are input into the preset TransE algorithm, which embeds them into a low-dimensional vector space and generates the corresponding vector templates.
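A bare-bones TransE update along these lines (embedding entities and relations so that head + relation ≈ tail) is sketched below with NumPy; the learning rate, margin and the use of a squared L2 distance are illustrative defaults rather than values taken from the embodiment, and the usual re-normalization of embeddings is omitted for brevity.

```python
import numpy as np


def transe_step(emb, triple, corrupted, lr=0.01, margin=1.0):
    """One stochastic TransE update on a (head, relation, tail) triple.

    `emb` maps each entity/relation name to a NumPy vector in the low-dimensional space;
    `corrupted` is the same triple with its head or tail randomly replaced.
    """
    h, r, t = (emb[x] for x in triple)
    h_c, _, t_c = (emb[x] for x in corrupted)

    pos = h + r - t    # residual of the true triple
    neg = h_c + r - t_c  # residual of the corrupted triple
    loss = margin + np.dot(pos, pos) - np.dot(neg, neg)
    if loss > 0:       # update only when the margin is violated
        emb[triple[0]] -= lr * 2 * pos
        emb[triple[1]] -= lr * 2 * (pos - neg)
        emb[triple[2]] += lr * 2 * pos
        emb[corrupted[0]] += lr * 2 * neg
        emb[corrupted[2]] -= lr * 2 * neg
    return max(loss, 0.0)
```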
And step S90, storing the vector template into a database to construct the corresponding knowledge graph.
After the vector templates are obtained, they are stored in the database, and the corresponding knowledge graph is constructed from the plurality of vector templates stored there; the specific construction method of the knowledge graph is not described in detail in this embodiment.
According to the embodiment, the vector template is obtained according to the artificial entity elements and the artificial labels through a preset TransE algorithm, and the corresponding knowledge graph is constructed according to the vector template, so that the comprehensiveness and the accuracy of the data of the knowledge graph are guaranteed.
Further, the step of inputting the query statement into a knowledge graph for querying to obtain a corresponding question and answer result includes:
step S31, vectorizing the query statement to generate a corresponding vector set;
In this embodiment, after the query statement is obtained, it is vectorized by an NLP algorithm to generate the corresponding vector set. It should be understood that vectorization of the query statement may also be implemented in other ways; this embodiment is not limited in this respect.
And step S32, matching the vector set with the vector template in the knowledge graph to obtain a corresponding question and answer result.
After the vector set is obtained, it is matched against the vector templates in the knowledge graph: specifically, the vector template in the knowledge graph that best matches the vector set is determined by calculating the degree of match between vectors, and that vector template is then parsed to obtain the corresponding question and answer result.
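A simple matcher over the stored vector templates might look like the sketch below; representing the query by the mean of its token vectors and scoring candidates by cosine similarity are assumed choices, since the embodiment only requires that a degree of match between vectors be calculated.

```python
import numpy as np


def best_matching_template(query_tokens, token_vectors, vector_templates):
    """Return the identifier of the knowledge-graph vector template closest to the query.

    `token_vectors` maps tokens to embeddings and `vector_templates` maps template
    identifiers to their stored vectors; the query vector is the mean of its known tokens.
    """
    known = [token_vectors[t] for t in query_tokens if t in token_vectors]
    query_vec = np.mean(known, axis=0)  # assumed vectorization of the query statement

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    return max(vector_templates, key=lambda name: cosine(query_vec, vector_templates[name]))
```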
The query statement is vectorized, and the question and answer result in the knowledge graph is more accurately determined through matching of the vector set and the vector template, so that the accuracy of the question and answer result is ensured.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where a knowledge-graph-based question-answering program is stored on the computer-readable storage medium, and when executed by a processor, the knowledge-graph-based question-answering program implements the operation of the knowledge-graph-based question-answering method as described above.
The specific embodiment of the computer-readable storage medium of the present invention is substantially the same as the above-mentioned embodiments of the knowledge-graph-based question-answering method, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. A question-answering method based on a knowledge graph is characterized by comprising the following steps:
acquiring a question and answer sentence input by a user, segmenting the question and answer sentence, and respectively acquiring an entity element in the question and answer sentence and a label corresponding to the entity element through a trained entity element identification model and a label identification model;
inputting the entity elements, the tags and the question and answer sentences into a Bayesian classifier, calculating the matching degree of each preset template in the Bayesian classifier and the question and answer sentences, and determining the preset template with the highest matching degree as a query template;
and inputting the entity elements and the tags into the query template to obtain corresponding query sentences, and inputting the query sentences into a knowledge graph for querying to obtain corresponding question and answer results.
2. The knowledge-graph-based question-answering method according to claim 1, wherein before the step of obtaining the question-answering sentence input by the user and segmenting the question-answering sentence, the method further comprises:
acquiring a training corpus through a web crawler technology, and segmenting the training corpus;
receiving manual labeling entity elements and manual labeling labels for words obtained after word segmentation, and obtaining manual entity elements and manual labels corresponding to the words;
inputting the artificial entity elements and the training corpora into a preset entity element recognition model so as to train the entity element recognition model;
and inputting the artificial label and the training corpus into a preset label recognition model so as to train the label recognition model.
3. The knowledge-graph-based question-answering method according to claim 2, wherein the step of training the entity element recognition model comprises:
extracting the training corpus through the entity element recognition model to obtain corresponding extracted entity elements;
determining entity elements which coincide with the extracted entity elements in the artificial entity elements as an entity element set;
calculating the accuracy of the entity element recognition model according to the entity element set, the artificial entity elements and the extracted entity elements;
and taking the entity element recognition model with the accuracy exceeding the preset first accuracy as the trained entity element recognition model.
4. The knowledge-graph-based question-answering method according to claim 3, wherein the step of calculating the accuracy of the entity element recognition model based on the entity element set, the artificial entity elements and the extracted entity elements comprises:
dividing the entity element set by the extracted entity elements to obtain the accuracy of the entity element identification model;
dividing the set of entity elements by the artificial entity elements to obtain a recall rate of the entity element identification model;
calculating a product of the accuracy and the recall rate and a sum of the accuracy and the recall rate, dividing the product by the sum, and multiplying an obtained calculation result by a preset numerical value to be used as a first F value of the entity element identification model;
and calculating the accuracy of the entity element identification model according to the accuracy, the recall rate and the first F value of the entity element identification model.
5. The knowledge-graph-based question-answering method according to claim 2, wherein the step of training the tag recognition model comprises:
extracting the training corpus through the label recognition model to obtain a corresponding extracted label;
determining a label which is coincident with the extracted label in the artificial labels as a label set;
calculating the accuracy of the label identification model according to the label set, the artificial label and the extracted label;
and taking the label recognition model with the accuracy exceeding the preset second accuracy as the trained label recognition model.
6. The knowledge-graph-based question-answering method according to claim 5, wherein the step of calculating the accuracy of the tag recognition model based on the tag set, the artificial tags and the extracted tags comprises:
dividing the label set by the extracted labels to obtain the accuracy of the label identification model;
dividing the label set by the artificial label to obtain the recall rate of the label identification model;
calculating a product of the accuracy and the recall rate and a sum of the accuracy and the recall rate, dividing the product by the sum, and multiplying an obtained calculation result by a preset numerical value to be used as a second F value of the tag identification model;
and calculating the accuracy of the label identification model according to the accuracy, the recall rate and the second F value of the label identification model.
7. The knowledge-graph-based question-answering method according to claim 2, wherein the step of obtaining the artificial entity elements and the artificial labels corresponding to the words is followed by:
taking an artificial entity element and an artificial label as input of a preset TransE algorithm, so that the artificial entity element and the artificial label are embedded into a low-dimensional vector space, and a corresponding vector template is generated;
and storing the vector template into a database to construct a corresponding knowledge graph.
8. The knowledge-graph-based question-answering method according to claim 7, wherein the step of inputting the query sentence into the knowledge graph for query to obtain the corresponding question-answering result comprises:
vectorizing the query statement to generate a corresponding vector set;
and matching the vector set with a vector template in the knowledge graph to obtain a corresponding question and answer result.
9. An apparatus, characterized in that the apparatus comprises: a memory, a processor, and a knowledge-graph based question-answering program stored on the memory and executable on the processor, the knowledge-graph based question-answering program configured to implement the steps of the knowledge-graph based question-answering method according to any one of claims 1 to 8.
10. A storage medium having stored thereon a knowledge-graph based question-answering program which, when executed by a processor, implements the steps of the knowledge-graph based question-answering method according to any one of claims 1 to 8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910885936.XA CN110781284B (en) | 2019-09-18 | 2019-09-18 | Knowledge graph-based question and answer method, device and storage medium |
PCT/CN2019/117583 WO2021051558A1 (en) | 2019-09-18 | 2019-11-12 | Knowledge graph-based question and answer method and apparatus, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910885936.XA CN110781284B (en) | 2019-09-18 | 2019-09-18 | Knowledge graph-based question and answer method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110781284A true CN110781284A (en) | 2020-02-11 |
CN110781284B CN110781284B (en) | 2024-05-28 |
Family
ID=69383813
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910885936.XA Active CN110781284B (en) | 2019-09-18 | 2019-09-18 | Knowledge graph-based question and answer method, device and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110781284B (en) |
WO (1) | WO2021051558A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113221573A (en) * | 2021-05-31 | 2021-08-06 | 平安科技(深圳)有限公司 | Entity classification method and device, computing equipment and storage medium |
CN113535917A (en) * | 2021-06-30 | 2021-10-22 | 山东师范大学 | Intelligent question-answering method and system based on travel knowledge map |
CN114357194B (en) * | 2022-01-11 | 2024-10-25 | 平安科技(深圳)有限公司 | Seed data expansion method and device, computer equipment and storage medium |
CN114860954A (en) * | 2022-04-28 | 2022-08-05 | 北京明略昭辉科技有限公司 | Entity linking method, device, equipment and medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140363082A1 (en) * | 2013-06-09 | 2014-12-11 | Apple Inc. | Integrating stroke-distribution information into spatial feature extraction for automatic handwriting recognition |
CN107066541A (en) * | 2017-03-13 | 2017-08-18 | 平安科技(深圳)有限公司 | The processing method and system of customer service question and answer data |
CN108491433B (en) * | 2018-02-09 | 2022-05-03 | 平安科技(深圳)有限公司 | Chat response method, electronic device and storage medium |
CN108959366B (en) * | 2018-05-21 | 2020-11-17 | 宁波薄言信息技术有限公司 | Open question-answering method |
CN110032632A (en) * | 2019-04-04 | 2019-07-19 | 平安科技(深圳)有限公司 | Intelligent customer service answering method, device and storage medium based on text similarity |
2019
- 2019-09-18 CN CN201910885936.XA patent/CN110781284B/en active Active
- 2019-11-12 WO PCT/CN2019/117583 patent/WO2021051558A1/en active Application Filing
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105068661A (en) * | 2015-09-07 | 2015-11-18 | 百度在线网络技术(北京)有限公司 | Man-machine interaction method and system based on artificial intelligence |
CN105868313A (en) * | 2016-03-25 | 2016-08-17 | 浙江大学 | Mapping knowledge domain questioning and answering system and method based on template matching technique |
CN107766483A (en) * | 2017-10-13 | 2018-03-06 | 华中科技大学 | The interactive answering method and system of a kind of knowledge based collection of illustrative plates |
CN107992585A (en) * | 2017-12-08 | 2018-05-04 | 北京百度网讯科技有限公司 | Universal tag method for digging, device, server and medium |
US20190228098A1 (en) * | 2018-01-19 | 2019-07-25 | International Business Machines Corporation | Facilitating answering questions involving reasoning over quantitative information |
CN109033374A (en) * | 2018-07-27 | 2018-12-18 | 四川长虹电器股份有限公司 | Knowledge mapping search method based on Bayes classifier |
CN109492077A (en) * | 2018-09-29 | 2019-03-19 | 北明智通(北京)科技有限公司 | The petrochemical field answering method and system of knowledge based map |
CN109308321A (en) * | 2018-11-27 | 2019-02-05 | 烟台中科网络技术研究所 | A kind of knowledge question answering method, knowledge Q-A system and computer readable storage medium |
CN109815318A (en) * | 2018-12-24 | 2019-05-28 | 平安科技(深圳)有限公司 | The problems in question answering system answer querying method, system and computer equipment |
Non-Patent Citations (1)
Title |
---|
叶帅 (YE SHUAI): "Research on Construction and Query Methods of a Coal Mine Domain Knowledge Graph Based on Neo4j", China Excellent Master's Theses Full-text Database, Engineering Science and Technology I, no. 2019, pages 021-502 *
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111597321A (en) * | 2020-07-08 | 2020-08-28 | 腾讯科技(深圳)有限公司 | Question answer prediction method and device, storage medium and electronic equipment |
CN111597321B (en) * | 2020-07-08 | 2024-06-11 | 腾讯科技(深圳)有限公司 | Prediction method and device of answers to questions, storage medium and electronic equipment |
CN111914074B (en) * | 2020-07-16 | 2023-06-20 | 华中师范大学 | Method and system for generating dialogue in limited field based on deep learning and knowledge graph |
CN111914074A (en) * | 2020-07-16 | 2020-11-10 | 华中师范大学 | Method and system for generating limited field conversation based on deep learning and knowledge graph |
CN112182178A (en) * | 2020-09-25 | 2021-01-05 | 北京字节跳动网络技术有限公司 | Intelligent question answering method, device, equipment and readable storage medium |
CN112507135A (en) * | 2020-12-17 | 2021-03-16 | 深圳市一号互联科技有限公司 | Visual knowledge graph query template construction method, device, system and storage medium |
CN112507135B (en) * | 2020-12-17 | 2021-11-16 | 深圳市一号互联科技有限公司 | Knowledge graph query template construction method, device, system and storage medium |
CN113254635A (en) * | 2021-04-14 | 2021-08-13 | 腾讯科技(深圳)有限公司 | Data processing method, device and storage medium |
CN114090620A (en) * | 2022-01-19 | 2022-02-25 | 支付宝(杭州)信息技术有限公司 | Query request processing method and device |
CN114090620B (en) * | 2022-01-19 | 2022-09-27 | 支付宝(杭州)信息技术有限公司 | Query request processing method and device |
CN115186780B (en) * | 2022-09-14 | 2022-12-06 | 江西风向标智能科技有限公司 | Discipline knowledge point classification model training method, system, storage medium and equipment |
CN115186780A (en) * | 2022-09-14 | 2022-10-14 | 江西风向标智能科技有限公司 | Discipline knowledge point classification model training method, system, storage medium and equipment |
CN116975657A (en) * | 2023-09-25 | 2023-10-31 | 中国人民解放军军事科学院国防科技创新研究院 | Instant advantage window mining method and device based on manual experience |
CN116975657B (en) * | 2023-09-25 | 2023-11-28 | 中国人民解放军军事科学院国防科技创新研究院 | Instant advantage window mining method and device based on manual experience |
Also Published As
Publication number | Publication date |
---|---|
CN110781284B (en) | 2024-05-28 |
WO2021051558A1 (en) | 2021-03-25 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN110781284B (en) | Knowledge graph-based question and answer method, device and storage medium | |
CN110837550B (en) | Knowledge graph-based question answering method and device, electronic equipment and storage medium | |
WO2020232861A1 (en) | Named entity recognition method, electronic device and storage medium | |
CN110781276A (en) | Text extraction method, device, equipment and storage medium | |
CN110727779A (en) | Question-answering method and system based on multi-model fusion | |
CN109446885B (en) | Text-based component identification method, system, device and storage medium | |
WO2021151270A1 (en) | Method and apparatus for extracting structured data from image, and device and storage medium | |
CN111198948A (en) | Text classification correction method, device and equipment and computer readable storage medium | |
CN108038208B (en) | Training method and device of context information recognition model and storage medium | |
CN108776677B (en) | Parallel sentence library creating method and device and computer readable storage medium | |
CN111191012A (en) | Knowledge graph generation apparatus, method and computer program product thereof | |
CN111143556A (en) | Software function point automatic counting method, device, medium and electronic equipment | |
CN112464927B (en) | Information extraction method, device and system | |
CN108399157A (en) | Dynamic abstracting method, server and the readable storage medium storing program for executing of entity and relation on attributes | |
US12056184B2 (en) | Method and apparatus for generating description information of an image, electronic device, and computer readable storage medium | |
CN110795942A (en) | Keyword determination method and device based on semantic recognition and storage medium | |
CN114780701A (en) | Automatic question-answer matching method, device, computer equipment and storage medium | |
CN114092948A (en) | Bill identification method, device, equipment and storage medium | |
CN115248890A (en) | User interest portrait generation method and device, electronic equipment and storage medium | |
CN112884009A (en) | Classification model training method and system | |
CN112580620A (en) | Sign picture processing method, device, equipment and medium | |
CN112100355A (en) | Intelligent interaction method, device and equipment | |
CN112035668A (en) | Event subject recognition model optimization method, device and equipment and readable storage medium | |
CN114842982B (en) | Knowledge expression method, device and system for medical information system | |
CN112860860A (en) | Method and device for answering questions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||