
CN112380865A - Method, device and storage medium for identifying entity in text - Google Patents

Method, device and storage medium for identifying entity in text

Info

Publication number
CN112380865A
Authority
CN
China
Prior art keywords
candidate
entities
candidate entity
determining
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011248130.9A
Other languages
Chinese (zh)
Inventor
吕荣荣
彭力
陈帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Pinecone Electronic Co Ltd
Original Assignee
Beijing Xiaomi Pinecone Electronic Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Pinecone Electronic Co Ltd filed Critical Beijing Xiaomi Pinecone Electronic Co Ltd
Priority to CN202011248130.9A
Publication of CN112380865A
Legal status: Pending

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295Named entity recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure relates to a method, an apparatus, and a storage medium for recognizing entities in text. The method includes: acquiring, from a knowledge graph, a first candidate entity set corresponding to a named item included in a target text; determining first feature factors respectively corresponding to the candidate entities in the first candidate entity set; determining a second candidate entity set from the first candidate entity set according to the first feature factors; determining second feature factors respectively corresponding to the candidate entities in the second candidate entity set, where the second feature factors include the text similarity between the target text and the candidate entities, and determining the second feature factors is less efficient than determining the first feature factors; and determining, from the second candidate entity set according to the second feature factors, a target entity serving as the recall object of the target text. This solves the poor real-time performance of the related art.

Description

Method, device and storage medium for identifying entity in text
Technical Field
The present disclosure relates to the field of language processing technologies, and in particular, to a method, an apparatus, and a storage medium for recognizing entities in a text.
Background
At present, in entity linking applications, CNNs (Convolutional Neural Networks) and LSTMs (Long Short-Term Memory networks) do not further extract local similar features between the query text and the knowledge base text, which can lose text detail features and lower the accuracy of entity linking. To avoid this loss of detail, the related art mostly applies BERT (Bidirectional Encoder Representations from Transformers) language models to entity linking.
Although the BERT language model can further extract local similar features from the query text and the knowledge base text, its prediction and inference speed is more than an order of magnitude slower than that of traditional networks such as CNN and LSTM, and such real-time prediction performance is clearly infeasible for an online entity linking system with strict real-time requirements.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a method, an apparatus, and a storage medium for identifying entities in text.
According to a first aspect of embodiments of the present disclosure, there is provided a method of identifying an entity in text, the method comprising:
acquiring, from a knowledge graph, a first candidate entity set corresponding to a named item included in a target text;
determining first feature factors respectively corresponding to the candidate entities in the first candidate entity set;
determining a second candidate entity set from the first candidate entity set according to the first feature factors;
determining second feature factors respectively corresponding to the candidate entities in the second candidate entity set, wherein the second feature factors comprise the text similarity between the target text and the candidate entities, and determining the second feature factors is less efficient than determining the first feature factors;
and determining, from the second candidate entity set according to the second feature factors, a target entity serving as the recall object of the target text.
Optionally, the obtaining, from a knowledge graph, of the first candidate entity set corresponding to the named item included in the target text comprises:
determining the named item in the target text according to a position identifier in the target text;
and obtaining synonym entities and alias entities corresponding to the named item according to a synonym mapping table and an alias mapping table of the knowledge graph, wherein the first candidate entity set comprises the synonym entities and the alias entities.
Optionally, the first feature factors comprise one or more of: the degree of association between the named item and the candidate entity, attributes of the candidate entity, the consistency of the target text with the candidate entity, and the similarity between the type of the named item and the type of the candidate entity.
Optionally, the determining of the first feature factors respectively corresponding to the candidate entities in the first candidate entity set comprises:
inputting the target text into a trained DeepType model to obtain the probability, output by the DeepType model, that the named item belongs to each type;
and synchronously determining the similarity between the type of each candidate entity and the type of the named item according to the probability that the named item belongs to each type, wherein the first feature factors further comprise the similarity between the type of the named item and the type of the candidate entity.
Optionally, the DeepType model is obtained by training a first initial neural network model in a mixed-precision training mode, wherein the first initial neural network model is based on the Faster Transformer architecture.
Optionally, the determining of a second candidate entity set from the first candidate entity set according to the first feature factors comprises:
for each candidate entity in the first candidate entity set, fusing all the first feature factors corresponding to the candidate entity to obtain a first fusion score;
and selecting, in descending order of the first fusion scores, N candidate entities from the first candidate entity set as the second candidate entity set, wherein N is a positive integer greater than 1 and smaller than M, and M is the total number of candidate entities in the first candidate entity set.
Optionally, the determining, from the second candidate entity set according to the second feature factors, of a target entity serving as the recall object of the target text comprises:
for each candidate entity in the second candidate entity set, fusing the second feature factors and the first feature factors to obtain a second fusion score;
and determining the target fusion score with the highest value among all the second fusion scores, and taking the candidate entity corresponding to the target fusion score as the target entity when the target fusion score is greater than a preset score threshold.
Optionally, the determining of the second feature factors respectively corresponding to the candidate entities in the second candidate entity set comprises:
inputting the target text and the candidate entity into a trained DeepMatch model to obtain the text similarity between the target text and the candidate entity output by the DeepMatch model.
Optionally, the DeepMatch model is obtained by training a second initial neural network model in a mixed-precision training manner, wherein the second initial neural network model is based on the Faster Transformer architecture, and the DeepMatch model can synchronously calculate the text similarity between a plurality of candidate entities and the target text.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for identifying an entity in text, the apparatus comprising:
an obtaining module configured to acquire, from a knowledge graph, a first candidate entity set corresponding to a named item included in a target text;
a first determining module configured to determine first feature factors respectively corresponding to the candidate entities in the first candidate entity set;
a second determining module configured to determine a second candidate entity set from the first candidate entity set according to the first feature factors;
a third determining module configured to determine second feature factors respectively corresponding to the candidate entities in the second candidate entity set, wherein the second feature factors comprise the text similarity between the target text and the candidate entities, and determining the second feature factors is less efficient than determining the first feature factors;
and a fourth determining module configured to determine, from the second candidate entity set according to the second feature factors, a target entity serving as the recall object of the target text.
According to a third aspect of the embodiments of the present disclosure, there is provided an apparatus for identifying an entity in text, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring, from a knowledge graph, a first candidate entity set corresponding to a named item included in a target text;
determining first feature factors respectively corresponding to the candidate entities in the first candidate entity set;
determining a second candidate entity set from the first candidate entity set according to the first feature factors;
determining second feature factors respectively corresponding to the candidate entities in the second candidate entity set, wherein the second feature factors comprise the text similarity between the target text and the candidate entities, and determining the second feature factors is less efficient than determining the first feature factors;
and determining, from the second candidate entity set according to the second feature factors, a target entity serving as the recall object of the target text.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of identifying entities in text provided by the first aspect of the present disclosure.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects: while taking the similarity between the target text and the candidate entities into account, the unimportant noise candidate entities in the first candidate entity set are filtered out using the first feature factors of the candidate entities in that set. This guarantees a high recall rate of candidate entities while shortening the time needed to acquire the subsequent second feature factors, thereby improving the real-time performance of entity prediction as a whole and, in particular, the real-time prediction performance of an online entity linking system with strict real-time requirements.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method of identifying entities in text in accordance with an exemplary embodiment.
Fig. 2 is a flowchart illustrating step S101 according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating step S103 according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating step S105 according to an exemplary embodiment.
FIG. 5 is a block diagram illustrating an apparatus for identifying entities in text in accordance with an example embodiment.
FIG. 6 is a block diagram illustrating an apparatus for identifying entities in text in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Before describing the method, apparatus, and storage medium for identifying entities in text of the present disclosure, an application scenario of various embodiments provided by the present disclosure is first described. Embodiments provided by the present disclosure may be used for entity linking in a knowledge question and answer scenario to determine answer texts corresponding to question texts.
In the related scenario, entity linking is the process of mapping an entity mention (named item) in natural language to the correct candidate entity in a knowledge graph. For a machine to better understand a text, it often needs to identify the entities in the text and put them in one-to-one correspondence with the entities in the corresponding knowledge graph. An entity linking algorithm must link the named item, together with the text of its context, to the correct mapped entity in the knowledge graph. Accurately understanding which specific entity a named item in the text refers to, and correctly linking it to an existing knowledge-graph entity, can greatly improve applications such as intelligent question answering, information retrieval, and topic analysis. For an online entity linking system, however, how to return results within a limited time while still understanding the text accurately is precisely the technical problem in the related art.
To this end, the present disclosure provides a method of identifying an entity in a text, and fig. 1 is a flowchart illustrating a method of identifying an entity in a text according to an exemplary embodiment, as shown in fig. 1, including the steps of:
in step S101, a first candidate entity set corresponding to a term is obtained from the knowledge graph according to the term included in the target text.
In step S102, first feature factors corresponding to the candidate entities in the first candidate entity set are determined.
In step S103, a second set of candidate entities is determined from the first set of candidate entities according to the first feature factor.
In step S104, second feature factors corresponding to the candidate entities in the second candidate entity set are determined, where the second feature factors include text similarity between the target text and the candidate entities, and the efficiency of determining the second feature factors is less than the efficiency of determining the first feature factors.
In step S105, a target entity that is a recall object of the target text is determined from the second candidate entity set according to the second feature factor.
The method may, for example, be applied to an apparatus for identifying entities in text, which may be provided as a server. The server may respond to a request signal from a terminal device (e.g., a smartphone or a smart speaker), acquire and recognize the corresponding target text in the request signal, and then determine the target entity corresponding to the target text. The server may then send the target entity back to the terminal device, completing the query of the target entity for the target text.
It should be noted that the named item is a language segment that expresses an entity in natural text. For example, if "I like Hua Zai's songs" is the target text, then "Hua Zai" in it is the named item. The named item may be obtained through entity recognition technology or may be given by an upstream module.
The first candidate entity set includes a plurality of candidate entities. Entities associated with named items are stored in the knowledge graph, and a high recall rate is required at this stage: the entities recalled from the knowledge graph constitute the first candidate entity set.
It should be noted that the acquisition time of the first feature factors is shorter than that of the second feature factors; that is, determining the second feature factors is less efficient than determining the first feature factors. The candidate entities in the first candidate entity set are therefore screened using the first feature factors. Determining the second candidate entity set from the first candidate entity set via the first feature factors filters out the unimportant noise candidate entities, which raises the overall efficiency of determining the second feature factors for the remaining candidates and thereby improves real-time performance.
Wherein the second feature factor comprises a text similarity of the target text and the candidate entity.
In this method, most of the noise candidate entities are filtered out using the first feature factors, shortening the time subsequently needed to determine the second feature factors and thus improving the real-time performance of the overall prediction. Because the text similarity between the target text and the candidate entities is still considered when determining the target entity to recall for the target text, the real-time performance of entity linking is improved without sacrificing its accuracy.
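As a rough illustration of this coarse-to-fine flow, the sketch below wires steps S101 to S105 together; the pluggable scorer functions and the fixed 0.5/0.5 fusion weights are stand-ins for the feature factors and MLP fusion described later, not the implementation of the disclosure.

```python
# Hypothetical sketch of the coarse-to-fine pipeline (steps S101-S105).
from typing import Callable, Optional

def link_entity(
    target_text: str,
    candidates: list[str],
    cheap_score: Callable[[str, str], float],   # fast, fused first feature factors
    costly_score: Callable[[str, str], float],  # slow text-similarity model
    top_n: int = 5,
    threshold: float = 0.5,
) -> Optional[str]:
    # S102/S103: rank by the cheap score and keep top-N, filtering noise
    # candidates before the expensive model runs.
    shortlist = sorted(candidates, key=lambda e: cheap_score(target_text, e),
                       reverse=True)[:top_n]
    # S104/S105: run the costly similarity only on the shortlist, fuse both
    # scores, and recall the best entity only if it clears the threshold.
    fused = {e: 0.5 * cheap_score(target_text, e) + 0.5 * costly_score(target_text, e)
             for e in shortlist}
    best = max(fused, key=fused.get)
    return best if fused[best] > threshold else None

# Toy usage with stand-in scorers:
cheap = lambda text, ent: 1.0 if ent.split("_")[0].lower() in text.lower() else 0.0
costly = lambda text, ent: len(set(text) & set(ent)) / len(set(text) | set(ent))
print(link_entity("I like Apple phones", ["Apple_company", "Apple_fruit"], cheap, costly))
```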
In one possible embodiment, the common way to determine the first candidate entity set from the knowledge graph is to maintain vocabularies with constructed rules and to recall entities based on edit distance. The vocabularies maintained by the constructed rules are usually a synonym table, an abbreviation-to-full-name mapping table, and an alias table. The synonym table can be extracted from redirect pages in an encyclopedia (such as Wikipedia); the abbreviation mapping table can be built by checking initials against the entities in the knowledge base, e.g., expanding "IBM" to "International Brotherhood of Magicians"; the alias table can be extracted from the alias and anchor-text information in infobox tables. Recalling a given named item based on edit distance means computing the edit distance between the named item and each entity in the knowledge graph and recalling the entity when that distance is smaller than a preset threshold, as in the sketch below.
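A minimal sketch of the edit-distance recall follows; the distance threshold and the entity names are illustrative assumptions.

```python
# Plain dynamic-programming Levenshtein distance, plus threshold-based recall.

def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def recall_by_edit_distance(mention: str, kg_entities: list[str], max_dist: int = 2):
    # Recall every knowledge-graph entity within the preset distance threshold.
    return [e for e in kg_entities if edit_distance(mention, e) < max_dist]

print(recall_by_edit_distance("aple", ["apple", "maple", "orange"]))  # ['apple', 'maple']
```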
For example, fig. 2 is a flowchart illustrating step S101 according to an exemplary embodiment, and as shown in fig. 2, the step S101 may include the following steps:
in step S201, a reference item in the target text is determined according to a location identifier in the target text.
In step S202, a synonym entity and an alias entity corresponding to the named item are obtained according to a synonym mapping table and an alias mapping table of the knowledge graph, and the first candidate entity set includes the synonym entity and the alias entity.
It should be noted that the position identifier is the marker with which an upstream module delimits the named item. Specifically, by structuring the text as "preceding text + position marker + named item + position marker + following text", the machine can distinguish which word is the named item; the marker is one kind of position identifier. For example, when the target text is constructed as the input structure "I think this [position marker] apple [position marker] is tasty", "apple" is the named item of the target text.
It is worth noting that the synonym mapping table and the alias mapping table are constructed from the knowledge graph. The synonym mapping table stores synonyms and the synonym entities containing them, and the alias mapping table stores aliases and the alias entities containing them. When generating the first candidate entity set, the union of the synonym entities for which the named item is a synonym and the alias entities for which the named item is an alias is selected. Illustratively, with the synonym mapping table {s1: {e1, e2}, s2: {e1}, s3: {e2}} and the alias mapping table {a1: {e1}, a2: {e1, e2}, a3: {e2}}, when the synonym of the named item m is s1 and the alias of m is a2, the first candidate entity set is {e1, e2}.
It should be noted that when candidate entities are identified by unique IDs in the synonym mapping table and the alias mapping table, the entities e1 and e2 in the example tables above may be represented directly by their corresponding unique IDs, and the corresponding candidate entities can then be matched according to those IDs.
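The union lookup over the two mapping tables can be sketched as follows, reusing the illustrative tables from the example above.

```python
# Candidate generation from the synonym and alias mapping tables: the first
# candidate entity set is the union of the entities mapped from the named
# item's synonym and alias. Table contents are the illustrative ones above.

synonym_table = {"s1": {"e1", "e2"}, "s2": {"e1"}, "s3": {"e2"}}
alias_table   = {"a1": {"e1"}, "a2": {"e1", "e2"}, "a3": {"e2"}}

def first_candidate_set(synonym: str, alias: str) -> set[str]:
    # Union of the synonym entities and the alias entities for the named item.
    return synonym_table.get(synonym, set()) | alias_table.get(alias, set())

print(first_candidate_set("s1", "a2"))  # {'e1', 'e2'}
```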
In a possible implementation, the first feature factors comprise one or more of: the degree of association between the named item and the candidate entity, attributes of the candidate entity, the consistency of the target text with the candidate entity, and the similarity between the type of the named item and the type of the candidate entity.
Specifically, the degree of association between the named item and a candidate entity can be obtained from annotation data: it is determined from the number of times the annotated named item is linked to each of its associated entities. Illustratively, if the first candidate entity set of the named item m is {e1, e2}, m is linked to entity e1 k1 times in the annotation data, and m is linked to entity e2 k2 times, then the association degree of m with e1 is k1/(k1+k2) and the association degree of m with e2 is k2/(k1+k2).
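The association degree therefore reduces to normalizing annotation link counts; the sketch below mirrors the k1/(k1+k2) example with made-up counts.

```python
# Association degree from annotation counts: mention m linked k1 times to e1
# and k2 times to e2 gives degrees k1/(k1+k2) and k2/(k1+k2).
from collections import Counter

def association_degrees(link_counts: Counter) -> dict[str, float]:
    total = sum(link_counts.values())
    return {entity: k / total for entity, k in link_counts.items()}

counts = Counter({"e1": 30, "e2": 10})  # hypothetical annotation link counts
print(association_degrees(counts))      # {'e1': 0.75, 'e2': 0.25}
```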
The candidate-entity attributes include the number of times the candidate entity has been viewed in the knowledge graph, the length of the candidate entity's description, the number of attributes in the candidate entity's infobox, whether the named item is a synonym of the candidate entity, whether the named item is an alias of the candidate entity, and the string edit distance between the named item and the candidate entity. In some embodiments, each entity in the knowledge graph includes a plurality of key-value pairs, each of which holds the value of one entity attribute. For example, if the attribute is the view count, the corresponding value can be read directly to determine how many times the candidate entity has been viewed.
The consistency of the target text with the candidate entity may be characterized by two factors. The first factor is whether the target text contains synonymous named items that all refer to the same entity; if so, the value of the factor is 1, otherwise 0. For example, in the target text "Sa Beining asks: what is Zhou Xingchi also called? Liu Dehua answers: Xing Ye!", the named item "Zhou Xingchi" and the named item "Xing Ye" both appear among the synonyms of the candidate entity "Zhou Xingchi - Chinese-language film actor and director", so the value of the factor characterizing the consistency of the candidate entity with the target text is 1.
The second factor is whether a keyword of the target text appears in the information of the candidate entity; if so, the value of the factor is 1, otherwise 0. For example, in the target text "Even Jobs cannot save this Apple", the keyword "Jobs" appears in the description text of the candidate entity "Apple - products of Apple Inc. …", so the value of the factor characterizing the consistency of the candidate entity with the target text is 1.
The type of the named item and the type of the candidate entity may be, for example, food, organization, person, and so on.
In one possible implementation, the similarity between the type of the named item and the type of the candidate entity can be computed with a DeepType model. Specifically, the target text is input into a trained DeepType model to obtain the probability, output by the model, that the named item belongs to each type; the similarity between the type of each candidate entity and the type of the named item is then determined synchronously from these probabilities.
It is worth noting that synchronously determining the similarities between the types of the candidate entities and the type of the named item shortens entity prediction time and improves the real-time performance of the overall entity prediction. For example, the synchronous determination may be implemented by merging the plurality of requests (one request per candidate entity for its type similarity) into a single HTTP service request.
The target text must be structured so that the DeepType model can distinguish which word the type computation is for. For example, the structure may be built with position identifiers, specifically as "preceding text + position marker + named item + position marker + following text", where the preceding text is the text before the start of the named item and the following text is the text after its end.
For example, to compute the type probabilities of the named item "Apple" in "Even Jobs cannot save this Apple", the structured input text is "Even Jobs cannot save this [position marker] Apple [position marker]"; through the position markers, the DeepType model takes "Apple" as the recognition object and outputs the probability that this named item belongs to each type in the text.
In addition, the type of each candidate entity can be obtained from the knowledge graph, and the similarity between the named item and each candidate entity's type is then determined jointly from that type and the obtained probability of the named item belonging to each type.
For example, the DeepType model outputs the type-probability result {l1: 0.8, l2: 0.1, l3: 0.1} for the named item "apple", where l1, l2, and l3 are different types and 0.8, 0.1, and 0.1 are their corresponding probabilities. If the type of candidate entity e1 as determined from the knowledge graph is l1, then the similarity between the named item and the type of that candidate entity is 0.8, obtained from the DeepType probability result and the type of e1.
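Under the illustrative values above, this type-similarity lookup amounts to reading the probability mass assigned to the candidate's knowledge-graph type, as the following sketch shows.

```python
# Type similarity: DeepType outputs a type-probability distribution for the
# named item; the similarity for a candidate is the probability assigned to
# that candidate's knowledge-graph type. Values are the illustrative ones above.

deeptype_probs = {"l1": 0.8, "l2": 0.1, "l3": 0.1}   # model output for "apple"
entity_types = {"e1": "l1", "e2": "l3"}              # types read from the knowledge graph

def type_similarity(entity: str) -> float:
    return deeptype_probs.get(entity_types[entity], 0.0)

print(type_similarity("e1"))  # 0.8, matching the worked example above
```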
In a possible implementation manner, the DeepType model is obtained by training a first initial neural network model in a mixed-precision training manner, wherein the first initial neural network model is based on the Faster Transformer architecture.
It should be noted that existing deep-learning models for entity linking are trained with 32-bit single-precision floating-point numbers (FP32), whereas the mixed-precision training approach trains the deep-learning model with 16-bit floating-point numbers (FP16), reducing the memory required for training. Adopting mixed-precision training in this method therefore effectively improves real-time prediction performance and shortens the time needed to recognize entities in text.
In addition, in the present disclosure the first initial neural network is a BERT model in which Faster Transformer replaces the Transformer. Faster Transformer is a performance-optimization scheme proposed by NVIDIA for Transformer inference; using it improves real-time prediction performance and shortens the time needed to recognize entities in text.
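For illustration only, the sketch below shows mixed-precision training with PyTorch's automatic mixed precision (AMP) utilities; the tiny stand-in classifier, random batches, and hyperparameters are assumptions, not the training code of the disclosure, and a CUDA device is assumed.

```python
# Hedged sketch of mixed-precision (FP16/FP32) training with PyTorch AMP.
import torch

model = torch.nn.Linear(768, 3).cuda()           # stand-in for the BERT-based classifier
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
scaler = torch.cuda.amp.GradScaler()             # loss scaling keeps FP16 gradients stable

batches = [(torch.randn(8, 768), torch.randint(0, 3, (8,))) for _ in range(3)]
for features, labels in batches:                 # placeholder training data
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():              # forward pass runs in FP16 where safe
        logits = model(features.cuda())
        loss = torch.nn.functional.cross_entropy(logits, labels.cuda())
    scaler.scale(loss).backward()                # scale up to avoid FP16 underflow
    scaler.step(optimizer)                       # unscale, then apply the FP32 update
    scaler.update()
```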
In one possible implementation, fig. 3 is a flowchart illustrating step S103 according to an exemplary embodiment, and as shown in fig. 3, includes the following steps:
in step S301, for each candidate entity in the first candidate entity set, all the first feature factors corresponding to the candidate entity are fused to obtain a first fusion score.
In step S302, the N candidate entities with the highest first fusion scores are selected from the first candidate entity set as the second candidate entity set, where N is a positive integer greater than 1 and smaller than M, and M is the total number of candidate entities in the first candidate entity set.
In the present disclosure, for each candidate entity in the first candidate entity set, all the first feature factors corresponding to that candidate entity are fused into a first fusion score, and the N candidate entities ranked highest by first fusion score are selected as the second candidate entity set, filtering out most of the noise candidate entities while keeping a high recall rate.
The first feature factors may be fused with a multilayer perceptron (MLP): the MLP forms a logistic-regression-style model that fuses the several feature factors and then scores the result, yielding the first fusion score.
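A minimal sketch of such an MLP fuser with a logistic output, followed by the top-N selection, is given below; the number of factors, layer sizes, and N are illustrative assumptions.

```python
# First-factor fusion with a small MLP ending in a sigmoid (logistic) output,
# then top-N selection of candidates for the second candidate entity set.
import torch

class FusionMLP(torch.nn.Module):
    def __init__(self, n_factors: int = 4):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_factors, 16),
            torch.nn.ReLU(),
            torch.nn.Linear(16, 1),
            torch.nn.Sigmoid(),                  # logistic output: fusion score in (0, 1)
        )

    def forward(self, factors: torch.Tensor) -> torch.Tensor:
        return self.net(factors).squeeze(-1)

fuser = FusionMLP()
first_factors = torch.rand(100, 4)               # first feature factors of M=100 candidates
first_scores = fuser(first_factors)              # first fusion scores
second_set_idx = first_scores.topk(5).indices    # keep the N=5 top-scoring candidates
print(second_set_idx.tolist())
```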
In one possible implementation, fig. 4 is a flowchart illustrating step S105 according to an exemplary embodiment, as shown in fig. 4, including the following steps:
In step S401, for each candidate entity in the second candidate entity set, the second feature factors and the first feature factors are fused to obtain a second fusion score.
In step S402, the target fusion score with the highest value is determined among all the second fusion scores, and when the target fusion score is greater than a preset score threshold, the candidate entity corresponding to it is taken as the target entity.
In the present disclosure, the first and second feature factors are fused so that the second fusion score of each candidate entity in the second candidate entity set is determined comprehensively from multiple features. When the highest-scoring target fusion score is greater than the preset score threshold, the corresponding candidate entity is taken as the target entity; if the target fusion score is less than or equal to the preset score threshold, the named item is considered to have no correct target entity in the knowledge graph.
It should be noted that the preset score threshold may be set according to actual situations, and this embodiment does not limit this.
The second feature factors and the first feature factors may likewise be fused with a multilayer perceptron (MLP): the MLP forms a logistic-regression-style model that fuses the several feature factors and then scores the result, yielding the second fusion score, as in the sketch below.
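The subsequent selection then reduces to a max-and-threshold decision; in the following sketch the scores and the threshold are made-up values.

```python
# Final decision: pick the highest second fusion score and link only if it
# beats the preset threshold.
second_scores = {"e1": 0.91, "e2": 0.40}   # hypothetical second fusion scores
SCORE_THRESHOLD = 0.6                      # application-specific; not fixed by the disclosure

best_entity = max(second_scores, key=second_scores.get)
target = best_entity if second_scores[best_entity] > SCORE_THRESHOLD else None
print(target)  # 'e1'; None would mean the named item has no correct entity in the graph
```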
In a possible implementation manner, the determining of the second feature factors respectively corresponding to the candidate entities in the second candidate entity set in step S104 may specifically include: inputting the target text and the candidate entity into a trained DeepMatch model to obtain the text similarity between the target text and the candidate entity output by the DeepMatch model.
It is worth noting that the DeepMatch model must compute the similarity between every candidate entity and the target text, and the number of candidate entities retrieved from the knowledge graph can at times be in the hundreds, which takes a long time and hurts real-time performance. Determining a smaller candidate entity set after fusing the first feature factors therefore reduces the number of candidate-to-text similarities the DeepMatch model must compute, shortens the acquisition time of the second feature factors, and improves the real-time performance of the overall entity prediction.
In the present disclosure, when computing similarity with the DeepMatch model, the structures of the target text and the candidate entity are first reconstructed so that the model can distinguish which two texts require a similarity judgment; the constructed target text and candidate entity are then spliced together and input into the DeepMatch model.
Specifically, the structures of the target text and the candidate entity may be reconstructed using position identifiers, e.g., as "preceding text + position marker + named item + position marker + following text". For example, constructing "I find the apple tasty" yields "I find the [position marker] apple [position marker] tasty".
The splicing of the constructed target text and the candidate entity may likewise be built with position identifiers. Illustratively, take "I think this [position marker 1] apple [position marker 1] is tasty" as the constructed target text and "Apple is a plant of the subfamily Maloideae of the family Rosaceae; its tree is deciduous …" as the candidate text; the spliced text is "[position marker 2] I think this [position marker 1] apple [position marker 1] is tasty [position marker 3] Apple is a plant of the subfamily Maloideae of the family Rosaceae; its tree is deciduous …", and this text is used as the input to the DeepMatch model.
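A minimal sketch of this marker-based construction and splicing follows; the literal marker tokens are assumptions, since the disclosure only requires distinguishable position markers.

```python
# Marker-based input construction and splicing for the similarity model.
M1, M2, M3 = "[marker1]", "[marker2]", "[marker3]"   # assumed marker tokens

def mark_mention(text: str, start: int, end: int) -> str:
    # preceding text + marker + named item + marker + following text
    return f"{text[:start]}{M1} {text[start:end]} {M1}{text[end:]}"

def splice(marked_target: str, candidate_desc: str) -> str:
    # Concatenate the marked target text and the candidate description
    # into one input sequence.
    return f"{M2} {marked_target} {M3} {candidate_desc}"

marked = mark_mention("I find the apple tasty", 11, 16)
print(splice(marked, "Apple is a plant of the family Rosaceae ..."))
```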
Similar to the DeepType model, the DeepMatch model is obtained by training a second initial neural network model in a mixed-precision training mode, and the second initial neural network model is likewise based on the Faster Transformer architecture, which shortens the time for recognizing entities in text and thus improves real-time performance.
Further, the DeepMatch model can synchronously calculate the text similarities between a plurality of candidate entities and the target text, further shortening the overall time for recognizing entities in text and improving real-time performance.
Based on the same inventive concept, the present disclosure also provides an apparatus for recognizing an entity in a text, and fig. 5 is a block diagram illustrating an apparatus for recognizing an entity in a text according to an exemplary embodiment. Referring to fig. 5, the apparatus includes an obtaining module 501, a first determining module 502, a second determining module 503, a third determining module 504, and a fourth determining module 505.
The obtaining module 501 is configured to acquire, from the knowledge graph, a first candidate entity set corresponding to a named item included in the target text.
The first determining module 502 is configured to determine the first feature factors respectively corresponding to the candidate entities in the first candidate entity set.
The second determining module 503 is configured to determine a second candidate entity set from the first candidate entity set according to the first feature factors.
The third determining module 504 is configured to determine the second feature factors respectively corresponding to the candidate entities in the second candidate entity set, wherein the second feature factors comprise the text similarity between the target text and the candidate entities, and determining the second feature factors is less efficient than determining the first feature factors.
The fourth determining module 505 is configured to determine, from the second candidate entity set according to the second feature factors, a target entity serving as the recall object of the target text.
Optionally, the obtaining module 501 includes:
a named item determination submodule configured to determine a named item in the target text from the location identifier in the target text.
An obtaining sub-module configured to obtain a synonym entity and an alias entity corresponding to the named item according to a synonym mapping table and an alias mapping table of the knowledge graph, where the first candidate entity set includes the synonym entity and the alias entity.
Optionally, the first feature factor comprises one or more of an association degree of the named item with the candidate entity, an attribute of the candidate entity, a correspondence of the target text with the candidate entity, and a similarity between a type of the named item and a type of the candidate entity.
Optionally, the first determining module 502 includes:
and the probability output sub-module is configured to input the target text into a trained DeepType model, and obtain the probability that the designated item output by the DeepType model belongs to each type.
The type similarity determining submodule is configured to synchronously determine the similarity between the types of the candidate entities and the type of the nominal item according to the probability that the nominal item belongs to each type, and the first characteristic factor further comprises the similarity between the type of the nominal item and the type of the candidate entity.
Optionally, the DeepType model is obtained by training a first initial neural network model in a mixed-precision training mode, wherein the first initial neural network model is based on the Faster Transformer architecture.
Optionally, the second determining module 503 includes:
and the first fusion sub-module is configured to fuse all first feature factors corresponding to each candidate entity in the first candidate entity set to obtain a first fusion score.
And a selecting submodule that selects, in descending order of the first fusion scores, N candidate entities from the first candidate entity set as the second candidate entity set, wherein N is a positive integer greater than 1 and smaller than M, and M is the total number of candidate entities in the first candidate entity set.
Optionally, the fourth determining module 505 includes:
and the second fusion sub-module is used for fusing the second characteristic factor and the first characteristic factor aiming at each candidate entity in the second candidate entity set to obtain a second fusion score.
And the target entity determining submodule determines a target fusion score with the highest score in all the second fusion scores, and takes a candidate entity corresponding to the target fusion score as the target entity under the condition that the target fusion score is greater than a preset score threshold value.
Optionally, the third determining module 504 is specifically configured to input the target text and the candidate entity into a trained DeepMatch model to obtain the text similarity between the target text and the candidate entity output by the DeepMatch model.
Optionally, the DeepMatch model is obtained by training a second initial neural network model in a mixed-precision training manner, wherein the second initial neural network model is based on the Faster Transformer architecture, and the DeepMatch model can synchronously calculate the text similarity between a plurality of candidate entities and the target text.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Based on the same inventive concept, the present disclosure also provides a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, implement the steps of the method of identifying an entity in a text provided by the present disclosure.
Based on the same inventive concept, the present disclosure also provides an apparatus for recognizing an entity in a text, and fig. 6 is a block diagram illustrating an apparatus 600 for recognizing an entity in a text according to an exemplary embodiment. For example, the apparatus 600 may be a mobile phone, a computer, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, apparatus 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, and a communications component 614.
The processing component 602 generally controls overall operation of the device 600, such as operations associated with display, data communication, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the above-described method of identifying entities in text. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the apparatus 600. Examples of such data include instructions for any application or method operating on the apparatus 600, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 606 provides power to the various components of device 600. Power components 606 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 600.
The multimedia component 608 includes a screen that provides an output interface between the device 600 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 610 is configured to output and/or input audio signals. For example, audio component 610 includes a Microphone (MIC) configured to receive external audio signals when apparatus 600 is in an operational mode, such as a speech recognition mode. The received audio signals may further be stored in the memory 604 or transmitted via the communication component 614. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
An input/output (I/O) interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The communication component 614 is configured to facilitate wired or wireless communication between the apparatus 600 and other devices. The apparatus 600 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 614 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 614 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods of identifying entities in text.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 604 comprising instructions, executable by the processor 620 of the apparatus 600 to perform the above-described method of identifying entities in text is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned method of identifying an entity in text when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method of identifying entities in text, the method comprising:
acquiring, from a knowledge graph, a first candidate entity set corresponding to a named item included in a target text;
determining first feature factors respectively corresponding to the candidate entities in the first candidate entity set;
determining a second candidate entity set from the first candidate entity set according to the first feature factors;
determining second feature factors respectively corresponding to the candidate entities in the second candidate entity set, wherein the second feature factors comprise the text similarity between the target text and the candidate entities, and determining the second feature factors is less efficient than determining the first feature factors;
and determining, from the second candidate entity set according to the second feature factors, a target entity serving as the recall object of the target text.
2. The method according to claim 1, wherein the obtaining, from a knowledge graph, of a first candidate entity set corresponding to a named item included in the target text comprises:
determining the named item in the target text according to a position identifier in the target text;
and obtaining synonym entities and alias entities corresponding to the named item according to a synonym mapping table and an alias mapping table of the knowledge graph, wherein the first candidate entity set comprises the synonym entities and the alias entities.
3. The method of claim 1, wherein the first feature factors comprise one or more of: the degree of association between the named item and the candidate entity, attributes of the candidate entity, the consistency of the target text with the candidate entity, and the similarity between the type of the named item and the type of the candidate entity.
4. The method of claim 1, wherein the determining of the first feature factors respectively corresponding to the candidate entities in the first candidate entity set comprises:
inputting the target text into a trained DeepType model to obtain the probability, output by the DeepType model, that the named item belongs to each type;
and synchronously determining the similarity between the type of each candidate entity and the type of the named item according to the probability that the named item belongs to each type, wherein the first feature factors further comprise the similarity between the type of the named item and the type of the candidate entity.
5. The method of claim 4, wherein the DeepType model is obtained by training a first initial neural network model in a mixed-precision training manner, and the first initial neural network model is based on the Faster Transformer architecture.
6. The method of claim 1, wherein the determining of a second candidate entity set from the first candidate entity set according to the first feature factors comprises:
for each candidate entity in the first candidate entity set, fusing all the first feature factors corresponding to the candidate entity to obtain a first fusion score;
and selecting, in descending order of the first fusion scores, N candidate entities from the first candidate entity set as the second candidate entity set, wherein N is a positive integer greater than 1 and smaller than M, and M is the total number of candidate entities in the first candidate entity set.
7. The method of claim 1, wherein the determining, from the second candidate entity set according to the second feature factors, of a target entity serving as the recall object of the target text comprises:
for each candidate entity in the second candidate entity set, fusing the second feature factors and the first feature factors to obtain a second fusion score;
and determining the target fusion score with the highest value among all the second fusion scores, and taking the candidate entity corresponding to the target fusion score as the target entity when the target fusion score is greater than a preset score threshold.
8. The method according to any one of claims 1 to 7, wherein the determining of the second feature factors respectively corresponding to the candidate entities in the second candidate entity set comprises:
inputting the target text and the candidate entity into a trained DeepMatch model to obtain the text similarity between the target text and the candidate entity output by the DeepMatch model.
9. The method of claim 8, wherein the DeepMatch model is obtained by training a second initial neural network model in a mixed-precision training manner, the second initial neural network model is based on the Faster Transformer architecture, and the DeepMatch model is capable of synchronously calculating the text similarity between the candidate entities and the target text.
10. An apparatus for identifying entities in text, the apparatus comprising:
an obtaining module configured to acquire, from a knowledge graph, a first candidate entity set corresponding to a named item included in a target text;
a first determining module configured to determine first feature factors respectively corresponding to the candidate entities in the first candidate entity set;
a second determining module configured to determine a second candidate entity set from the first candidate entity set according to the first feature factors;
a third determining module configured to determine second feature factors respectively corresponding to the candidate entities in the second candidate entity set, wherein the second feature factors comprise the text similarity between the target text and the candidate entities, and determining the second feature factors is less efficient than determining the first feature factors;
and a fourth determining module configured to determine, from the second candidate entity set according to the second feature factors, a target entity serving as the recall object of the target text.
11. An apparatus for identifying entities in text, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring, according to a nominal item included in a target text, a first candidate entity set corresponding to the nominal item from a knowledge graph;
determining first characteristic factors respectively corresponding to the candidate entities in the first candidate entity set;
determining a second candidate entity set from the first candidate entity set according to the first characteristic factor;
determining second characteristic factors corresponding to the candidate entities in the second candidate entity set respectively, wherein the second characteristic factors comprise text similarity between the target text and the candidate entities, and the efficiency of determining the second characteristic factors is lower than that of determining the first characteristic factors;
and determining a target entity serving as a recall object of the target text from the second candidate entity set according to the second characteristic factor.
12. A computer-readable storage medium having computer program instructions stored thereon, wherein the program instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 9.
CN202011248130.9A 2020-11-10 2020-11-10 Method, device and storage medium for identifying entity in text Pending CN112380865A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011248130.9A CN112380865A (en) 2020-11-10 2020-11-10 Method, device and storage medium for identifying entity in text

Publications (1)

Publication Number Publication Date
CN112380865A (en) 2021-02-19

Family

ID=74579667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011248130.9A Pending CN112380865A (en) 2020-11-10 2020-11-10 Method, device and storage medium for identifying entity in text

Country Status (1)

Country Link
CN (1) CN112380865A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100169375A1 (en) * 2008-12-29 2010-07-01 Accenture Global Services Gmbh Entity Assessment and Ranking
US20170293640A1 (en) * 2016-03-28 2017-10-12 Cogniac, Corp. Dynamic Adaptation of Feature Identification and Annotation
CN106909655A (en) * 2017-02-27 2017-06-30 中国科学院电子学研究所 Found and link method based on the knowledge mapping entity that production alias is excavated
CN108415902A (en) * 2018-02-10 2018-08-17 合肥工业大学 A kind of name entity link method based on search engine
CN109241294A (en) * 2018-08-29 2019-01-18 国信优易数据有限公司 A kind of entity link method and device
CN109522551A (en) * 2018-11-09 2019-03-26 天津新开心生活科技有限公司 Entity link method, apparatus, storage medium and electronic equipment
CN110795527A (en) * 2019-09-03 2020-02-14 腾讯科技(深圳)有限公司 Candidate entity ordering method, training method and related device
CN110929038A (en) * 2019-10-18 2020-03-27 平安科技(深圳)有限公司 Entity linking method, device, equipment and storage medium based on knowledge graph
CN111339737A (en) * 2020-02-27 2020-06-26 北京声智科技有限公司 Entity linking method, device, equipment and storage medium
CN111368096A (en) * 2020-03-09 2020-07-03 中国平安人寿保险股份有限公司 Knowledge graph-based information analysis method, device, equipment and storage medium
CN111428031A (en) * 2020-03-20 2020-07-17 电子科技大学 Graph model filtering method fusing shallow semantic information
CN111651570A (en) * 2020-05-13 2020-09-11 深圳追一科技有限公司 Text sentence processing method and device, electronic equipment and storage medium
CN111898382A (en) * 2020-06-30 2020-11-06 北京搜狗科技发展有限公司 Named entity recognition method and device for named entity recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ANKITA: "Part-of-speech Tagging and Named Entity Recognition Using Improved Hidden Markov Model and Bloom Filter", 2018 International Conference on Computing, Power and Communication Technologies (GUCON), 28 March 2019, pages 1072-1077 *
庞焜元; 唐晋韬; 李莎莎; 王挺: "Research on Feature Text Selection in Entity Disambiguation", Computer and Digital Engineering, no. 08, 20 August 2017 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113220899A (en) * 2021-05-10 2021-08-06 上海博亦信息科技有限公司 Intellectual property identity identification method based on academic talent information intellectual map
CN113836874A (en) * 2021-09-16 2021-12-24 北京小米移动软件有限公司 Text error correction method and device

Similar Documents

Publication Publication Date Title
US20220214775A1 (en) Method for extracting salient dialog usage from live data
CN107818781B (en) Intelligent interaction method, equipment and storage medium
CN107797984B (en) Intelligent interaction method, equipment and storage medium
TW201935273A (en) A statement user intention identification method and device
CN111241237B (en) Intelligent question-answer data processing method and device based on operation and maintenance service
CN107077845B (en) Voice output method and device
CN111967224A (en) Method and device for processing dialog text, electronic equipment and storage medium
CN107330120A (en) Inquire answer method, inquiry answering device and computer-readable recording medium
CN110781305A (en) Text classification method and device based on classification model and model training method
JP7488871B2 (en) Dialogue recommendation method, device, electronic device, storage medium, and computer program
CN111984749B (en) Interest point ordering method and device
CN111708869A (en) Man-machine conversation processing method and device
CN111984784B (en) Person post matching method, device, electronic equipment and storage medium
CN108121736A (en) A kind of descriptor determines the method for building up, device and electronic equipment of model
CN107945802A (en) Voice recognition result processing method and processing device
CN110619050A (en) Intention recognition method and equipment
CN108345608A (en) A kind of searching method, device and equipment
CN112380865A (en) Method, device and storage medium for identifying entity in text
CN111444321B (en) Question answering method, device, electronic equipment and storage medium
CN112836026B (en) Dialogue-based inquiry method and device
CN114077834A (en) Method, device and storage medium for determining similar texts
CN117932022A (en) Intelligent question-answering method and device, electronic equipment and storage medium
CN114255750B (en) Data set construction and task-based dialogue method, electronic device and storage medium
CN106354762B (en) Service positioning method and device for interactive statements
CN113505293B (en) Information pushing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination