CN114661861A - Text matching method and device, storage medium and terminal - Google Patents
- Publication number: CN114661861A (application CN202210170758.4A)
- Authority
- CN
- China
- Prior art keywords
- text
- graph
- feature vector
- information
- nodes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F16/3344 — Query execution using natural language analysis
- G06F16/35 — Clustering; Classification
- G06F18/253 — Fusion techniques of extracted features
- G06F40/30 — Semantic analysis
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
Abstract
A text matching method and device, a storage medium, and a terminal are provided. The method comprises: acquiring a first text; constructing an element graph of the first text; extracting semantic information of the first text; obtaining semantic information and structural information of a second text, where the structural information of the second text comprises the element graph of the second text and/or graph-embedding feature information computed from that element graph; and determining a matching result of the first text and the second text according to the semantic information and element graph of the first text and the semantic information and structural information of the second text. The scheme provided by the invention can improve the accuracy of text matching.
Description
Technical Field
The invention relates to the technical field of natural language processing, in particular to a text matching method and device, a storage medium and a terminal.
Background
With the development of artificial intelligence, Natural Language Processing (NLP) technology has been applied ever more widely across fields, and text matching technology has emerged with it. Text matching is a technique for measuring the similarity or relevance between texts. In the prior art, the accuracy of text matching still needs to be improved; in particular, when texts are long, matching accuracy drops markedly.
Therefore, a text matching method capable of improving matching accuracy is urgently needed.
Disclosure of Invention
The technical problem solved by the invention is how to improve the accuracy of text matching.
In order to solve the above technical problem, an embodiment of the present invention provides a text matching method, where the method includes: acquiring a first text; constructing an element graph of the first text; extracting semantic information of the first text; obtaining semantic information and structural information of a second text, where the structural information of the second text comprises the element graph of the second text and/or graph-embedding feature information computed from that element graph; and determining a matching result of the first text and the second text according to the semantic information and element graph of the first text and the semantic information and structural information of the second text. The element graph comprises a plurality of nodes, edges between the nodes, and weights of the edges, where the nodes are elements contained in a text, an edge indicates an association between the two nodes it connects, and the weight of an edge indicates the degree of association between those two nodes.
Optionally, constructing the element graph of the first text includes: constructing an initial element graph of the first text, the initial element graph comprising a plurality of nodes and edges between the nodes, where an edge indicates whether the two nodes it connects occur in the same sentence; determining, from the first text, a sentence set corresponding to each node, where every sentence in the set is associated with the element corresponding to that node; and, for two nodes connected by an edge, calculating the similarity between the sentence sets corresponding to the two nodes to obtain the weight of the edge between them.
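As a non-limiting sketch of the construction above, the element graph could be built as follows. Two assumptions are made that the description does not fix: whether an element is associated with a sentence is decided by plain substring matching, and Jaccard similarity stands in for the unspecified sentence-set similarity.

```python
import itertools

def jaccard(a, b):
    """Jaccard similarity between two sets of sentence indices."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def build_element_graph(sentences, elements):
    """Build an element graph: nodes are elements, an edge links two
    elements that co-occur in at least one sentence, and the edge weight
    is the Jaccard similarity of the two elements' sentence sets."""
    # Sentence set per element: every sentence mentioning the element.
    sent_sets = {e: {i for i, s in enumerate(sentences) if e in s}
                 for e in elements}
    edges = {}
    for u, v in itertools.combinations(elements, 2):
        if sent_sets[u] & sent_sets[v]:  # co-occur in at least one sentence
            edges[(u, v)] = jaccard(sent_sets[u], sent_sets[v])
    return sent_sets, edges
```

Any graph library could hold the result; plain dicts are used here only to keep the sketch self-contained.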
Optionally, determining, from the first text, the sentence set corresponding to each node includes: calculating the similarity between each sentence and each node; and, if the similarity between a sentence and every node is smaller than a fifth preset threshold, removing that sentence.
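The sentence-filtering step above can be sketched as follows. The token-overlap similarity and the threshold value are illustrative stand-ins; the description specifies neither the similarity measure nor the value of the fifth preset threshold.

```python
def token_overlap(sentence, node):
    """Toy similarity: fraction of the node's tokens present in the sentence."""
    toks = node.split()
    return sum(t in sentence for t in toks) / len(toks)

def filter_sentences(sentences, nodes, sim, threshold=0.1):
    """Drop any sentence whose similarity to every node falls below the
    threshold; such sentences are treated as boilerplate that would only
    add noise to the element graph."""
    return [s for s in sentences
            if any(sim(s, n) >= threshold for n in nodes)]
```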
Optionally, before determining the sentence set corresponding to each node from the first text, the method further includes: and carrying out deduplication processing on a plurality of nodes in the initial element graph.
Optionally, extracting the semantic information of the first text includes: inputting the first text into a semantic extraction model to obtain the semantic feature vector output by the model, where the semantic extraction model is obtained by training a first preset model with sample texts, each sample text belongs to the same field as the first text, and each sample text carries a pre-assigned label.
Optionally, the field is the field of legal documents, and the label includes one or more of: cited legal provisions, the cause of action, and the document type.
Optionally, determining the matching result of the first text and the second text according to the semantic information and element graph of the first text and the semantic information and structural information of the second text includes: fusing a first element graph and a second element graph to obtain a fused element graph, where the first element graph is the element graph of the first text and the second element graph is the element graph of the second text; calculating, with a first graph convolution network, the graph-embedding feature vector corresponding to the fused element graph, denoted the fused graph feature vector; fusing the fused semantic feature vector and the fused graph feature vector to obtain a fused feature vector, where the fused semantic feature vector is obtained by fusing the semantic feature vector of the first text with that of the second text; and determining the matching result from the fused feature vector using a first classifier.
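A minimal sketch of the fuse-then-classify pipeline above. The description fixes neither the fusion operator nor the classifier, so this sketch assumes concatenation for fusion and a logistic classifier as a stand-in for the trained first classifier; the graph-embedding step is represented only by its output vector.

```python
import math

def fuse(vec_a, vec_b):
    """Fuse two feature vectors by concatenation (one common choice)."""
    return vec_a + vec_b

def linear_classifier(weights, bias):
    """Stand-in for the trained first classifier: logistic regression
    over the fused feature vector, returning (is_match, confidence)."""
    def predict(vec):
        z = sum(w * x for w, x in zip(weights, vec)) + bias
        p = 1.0 / (1.0 + math.exp(-z))
        return p >= 0.5, p
    return predict

def match(sem_a, sem_b, fused_graph_vec, classifier):
    """Fuse the two semantic vectors, fuse the result with the
    graph-embedding vector of the fused element graph, then classify."""
    fused_sem = fuse(sem_a, sem_b)
    fused = fuse(fused_sem, fused_graph_vec)
    return classifier(fused)
```

In practice the weights and bias would come from the training procedure described below, not be set by hand.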
Optionally, the first graph convolution network and the first classifier are obtained by training a first preset graph convolution network and a first preset classifier with training samples, where a training sample includes a first sample text, a second sample text, and a pre-labeled real matching result. Before determining the matching result of the first text and the second text according to the semantic information and element graph of the first text and the semantic information and element graph of the second text, the method further includes: fusing the element graph of the first sample text and the element graph of the second sample text to obtain a fused sample element graph; calculating, with the first preset graph convolution network, the graph-embedding feature vector corresponding to the fused sample element graph, denoted the fused sample graph feature vector; fusing a fused sample semantic feature vector with the fused sample graph feature vector to obtain a fused sample feature vector, where the fused sample semantic feature vector is obtained by fusing the semantic feature vector of the first sample text with that of the second sample text; determining a first predicted matching result from the fused sample feature vector using the first preset classifier; and calculating a first prediction loss from the first predicted matching result and the real matching result, and updating the first preset graph convolution network and the first preset classifier according to the first prediction loss until a preset training-stop condition is met.
Optionally, the fused element graph includes a plurality of aligned nodes, edges between the aligned nodes, and weights of those edges, where an aligned node is a node present in both the first element graph and the second element graph, and each aligned node carries feature information. Fusing the element graph of the first text with the element graph of the second text includes: determining the plurality of aligned nodes; for every two aligned nodes, judging whether an edge exists between them in the first element graph or the second element graph and, if so, constructing an edge between the two aligned nodes, where the weight of that edge is determined from the weight of the corresponding edge in the first element graph and/or the second element graph; and, for each aligned node, determining its feature information from the first sentence set and the second sentence set of that node, where the first sentence set is the node's sentence set in the first element graph and the second sentence set is its sentence set in the second element graph.
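The graph-fusion step can be illustrated as follows. Averaging the edge weights over the graphs that contain the edge is an assumption; the description only says the fused weight is determined from the weight in the first and/or second element graph.

```python
def fuse_element_graphs(g1, g2):
    """Fuse two element graphs represented as
    {'nodes': set, 'edges': {(u, v): weight}}.
    Aligned nodes are the nodes present in both graphs; an edge between
    two aligned nodes is kept if it exists in either graph, with its
    weight averaged over the graphs that contain it."""
    aligned = g1['nodes'] & g2['nodes']
    edges = {}
    for u, v in set(g1['edges']) | set(g2['edges']):
        if u in aligned and v in aligned:
            ws = [g['edges'][(u, v)] for g in (g1, g2)
                  if (u, v) in g['edges']]
            edges[(u, v)] = sum(ws) / len(ws)
    return {'nodes': aligned, 'edges': edges}
```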
Optionally, determining the feature information of each aligned node from its first sentence set and second sentence set includes: for each aligned node, fusing the semantic information of its first sentence set with the semantic information of its second sentence set to obtain first feature information of the node; for each aligned node, calculating the similarity between the first sentence set and the second sentence set from their semantic information, and taking that similarity as second feature information of the node; and determining the feature information of each aligned node from its first and second feature information.
Optionally, determining the matching result of the first text and the second text according to the semantic information and element graph of the first text and the semantic information and structural information of the second text includes: calculating, with a second graph convolution network, the graph-embedding feature vector corresponding to the element graph of the first text, denoted the first graph-embedding feature vector; determining a total feature vector of the first text from the first graph-embedding feature vector and the semantic feature vector of the first text; determining a total feature vector of the second text from a second graph-embedding feature vector and the semantic feature vector of the second text, where the second graph-embedding feature vector is the graph-embedding feature vector of the second text; and determining the matching result from the total feature vector of the first text and the total feature vector of the second text.
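The per-text variant above, in which each text gets its own total feature vector, can be sketched as follows. Concatenation for combining the two vectors, cosine similarity for comparing the totals, and the threshold value are all assumptions not fixed by the description.

```python
import math

def total_feature_vector(graph_vec, sem_vec):
    """Combine a text's graph-embedding vector and semantic vector into
    one total feature vector (concatenation is one possible choice)."""
    return graph_vec + sem_vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match_by_similarity(vec1, vec2, threshold=0.8):
    """Texts match when the cosine similarity of their total feature
    vectors clears a threshold (the value here is illustrative)."""
    return cosine(vec1, vec2) >= threshold
```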
In order to solve the above technical problem, an embodiment of the present invention further provides a text matching apparatus, where the apparatus includes: a first acquisition module for acquiring a first text; a construction module for constructing an element graph of the first text; a semantic extraction module for extracting semantic information of the first text; a second acquisition module for obtaining semantic information and structural information of a second text, where the structural information of the second text comprises the element graph of the second text and/or graph-embedding feature information computed from that element graph; and a matching module for determining a matching result of the first text and the second text according to the semantic information and element graph of the first text and the semantic information and structural information of the second text. The element graph comprises a plurality of nodes, edges between the nodes, and weights of the edges, where the nodes are elements contained in the first text, an edge indicates an association between the two nodes it connects, and the weight of an edge indicates the degree of association between those two nodes.
An embodiment of the present invention further provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the text matching method are performed.
The embodiment of the present invention further provides a terminal, which includes a memory and a processor, where the memory stores a computer program that can be run on the processor, and the processor executes the steps of the text matching method when running the computer program.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following beneficial effects:
in the scheme of the embodiment of the invention, the element graph of the first text is constructed, and the matching result of the first text and the second text is determined according to the semantic information and element graph of the first text and the semantic information and structural information of the second text. The element graph comprises a plurality of nodes, edges between the nodes, and the weights of the edges, where the nodes are elements contained in the text, an edge indicates an association between the two nodes it connects, and its weight indicates the degree of that association; the element graph can therefore embody the structural characteristics of the text. Further, because the semantic information reflects the semantic features of the text and the element graph reflects its structural features, matching the first text against the second text can fuse both kinds of features, which improves the accuracy of the matching result.
Further, in the scheme of this embodiment, an edge between nodes indicates whether the two nodes it connects occur in the same sentence, and the similarity between the sentence sets corresponding to the two nodes serves as the weight of the edge: the higher the similarity between the sentence sets, the greater the degree of association between the two nodes. With this scheme, sentences associated with the same element in the text are gathered into the same sentence set, and the relationships among sentence sets are then expressed through the relationships among nodes, so that the text is converted into an element graph. The element graph makes the structural information of the text intuitive and integrates the content related to different elements, which is conducive to improving the accuracy of subsequent matching.
Further, considering that a long text may contain a large amount of boilerplate content (e.g., standard legal language), if the similarity between a sentence and every node is less than the fifth preset threshold, the sentence is removed. This eliminates invalid, interfering information from the text, making the constructed element graph more accurate.
Furthermore, the first element graph and the second element graph are fused to obtain a fused element graph, from which a graph-embedding feature vector is computed. The semantic feature information of the first text and of the second text is likewise fused into a fused semantic feature vector, and the matching result is then determined from the graph-embedding feature vector and the fused semantic feature vector. With this scheme, the structural and semantic information of the two texts interact before the prediction is made, so the two texts are better understood and the matching result is more accurate.
Drawings
Fig. 1 is a schematic view of an application scenario of a text matching method in an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a text matching method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating one embodiment of step S202 in FIG. 2;
FIG. 4 is a flowchart illustrating one embodiment of step S205 of FIG. 2;
FIG. 5 is a schematic flow chart diagram illustrating another embodiment of step S205 in FIG. 2;
fig. 6 is a schematic structural diagram of a text matching apparatus according to an embodiment of the present invention.
Detailed Description
As described in the background art, there is a need for a text matching method capable of improving matching accuracy.
In the prior art, an unsupervised algorithm is usually adopted to extract the keywords contained in a text, the similarity between texts is then computed directly from the word vectors of those keywords, and the matching result is finally determined from that similarity. When texts are long, however, their content may contain a great deal of redundancy, so their characteristics are not prominent, and the similarity computed from word vectors cannot accurately represent the actual similarity between the texts.
In addition, text matching is also generally performed with deep learning methods in the prior art. Specifically, a neural network is trained on training samples, each sample pre-labeled with the actual matching result of its text pair, so that through training the network gains the ability to judge whether texts match. Such methods, however, can currently handle only sentence-length texts and struggle with article-length texts; that is, matching long texts is difficult. The reason is that the model's computational cost grows sharply with text length, accuracy cannot be guaranteed, and the matching process is inefficient.
From the above, in the prior art, in an application scenario of long text matching, the accuracy of text matching still needs to be improved.
In order to solve the foregoing technical problem, an embodiment of the present invention provides a text matching method. In the scheme of the embodiment, an element graph of a first text is constructed, and a matching result of the first text and a second text is determined according to the semantic information and element graph of the first text and the semantic information and structural information of the second text. The element graph comprises a plurality of nodes, edges between the nodes, and the weights of the edges, where the nodes are elements contained in the text, an edge indicates an association between the two nodes it connects, and its weight indicates the degree of that association, so the element graph can embody the structural characteristics of the text. Further, because the semantic information embodies the semantic features of the text and the element graph embodies its structural features, matching the first text against the second text can fuse both kinds of features, which improves the accuracy of the matching result.
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, embodiments accompanying figures are described in detail below.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a text matching method in an embodiment of the present invention. The following non-limiting description of an application scenario of the embodiment of the present invention is made with reference to fig. 1.
In an application scenario of the embodiment of the present invention, the user terminal 101 may be coupled to the execution terminal 102, and the execution terminal 102 may be coupled to the database 103. The user terminal 101 may be a terminal used by a user, the user may be a user with text matching requirements, the execution terminal 102 may be a terminal for executing the text matching method according to the embodiment of the present invention, and the execution terminal 102 may be various existing terminal devices with data receiving and data processing capabilities, for example, a computer, a server, and the like.
Further, the database 103 may store a plurality of second texts, and may also store semantic information and structural information of each second text. For specific meanings of the semantic information and the structural information and the steps of acquiring the semantic information and the structural information, reference may be made to the following description, which is not repeated herein.
The first text and the second text may be both long texts, and the long texts may be texts with the number of characters greater than a first preset threshold, where the first preset threshold may be 100, but is not limited thereto. In other words, "text matching" in the embodiment of the present invention may refer to matching between a long text and a long text.
In the embodiment of the present invention, the first text and the plurality of second texts may belong to the same field (for example, a technical field, an application field, and the like). In one non-limiting example, the first text and the second text are both legal documents, i.e., the field is the field of legal documents. The legal document may refer to a text related to legal contents, for example, it may be a normative legal document or a non-normative legal document. More specifically, the legal documents in the present embodiment may be court trial notes, referee documents, and the like, but are not limited thereto.
In other embodiments, the first text and the second text may also be news documents, i.e. the domain is a news document domain.
In a specific implementation, after receiving the first text sent by the user terminal 101, the execution terminal 102 may read semantic information and structural information of a plurality of second texts from the database 103, and for the first text and any one of the second texts, the solution of the embodiment of the present invention may be executed to determine a matching result between the first text and the second text, so as to obtain one or more second texts matched with the first text. Wherein the matching result can be used to indicate whether the first text and the second text match. More specifically, the text matching method in the present embodiment may be applied to the field of legal documents, and by executing the text matching method in the present embodiment, a plurality of second texts that match the first text may be accurately determined.
Further, one or more second texts matching the first texts may also be sent to the user terminal 101.
It should be noted that fig. 1 only exemplarily shows an application scenario of the embodiment of the present invention, and in other embodiments, the first text and the second text may both be stored locally at the execution terminal 102, or may both be stored in the database 103, but are not limited thereto.
Referring to fig. 2, fig. 2 is a schematic flowchart of a text matching method according to an embodiment of the present invention. The method may be executed by a terminal, which may be any existing terminal device with data receiving and data processing capabilities, for example the execution terminal 102 shown in fig. 1; more specifically, it may be a mobile phone, computer, Internet-of-Things device, server, and the like, but is not limited thereto. With the scheme of the embodiment of the invention, the matching result between the first text and the second text can be accurately determined according to the semantic information and structural information of the first text and of the second text. The text matching method shown in fig. 2 may include the following steps:
step S201: acquiring a first text;
step S202: constructing an element graph of the first text;
step S203: extracting semantic information of the first text;
step S204: acquiring semantic information and structural information of a second text;
step S205: and determining a matching result of the first text and the second text according to the semantic information and the element graph of the first text and the semantic information and the structural information of the second text.
It is understood that, in a specific implementation, the method may be implemented by a software program running in a processor integrated inside a chip or a chip module; alternatively, the method can be implemented in hardware or a combination of hardware and software.
It should be noted that the scheme of the embodiment of the present invention is not necessarily executed in the order of step S201 to step S205, and the execution order of step S201 to step S205 is not limited in the present embodiment. More specifically, the present embodiment does not limit the execution order of step S202, step S203, and step S204.
In the specific implementation of step S201, the first text may be acquired from an external user terminal in real time, may be pre-stored in a memory of the terminal executing the embodiment of the present invention, or may be read from a database coupled to the terminal executing the embodiment of the present invention, which is not limited in this embodiment. More specifically, the first text may be uploaded by the user through the user terminal to the terminal performing the embodiment of the present invention.
It should be noted that this embodiment does not limit the format of the first text: the first text may be in Document (DOC) format, Portable Document Format (PDF), or plain-text (TXT) format, but is not limited thereto.
In one non-limiting example, the first text can be a legal document, such as, but not limited to, a prosecution, a court trial transcript, a query transcript, a referee document, and the like.
In a specific implementation of step S202, an element graph of the first text may be constructed, where the element graph includes a plurality of nodes, edges between the nodes, and weights of the edges.
Specifically, the nodes in the element graph may be elements contained in the text, and the nodes and the elements in the text are in one-to-one correspondence. Therefore, before constructing the element graph, elements included in the text need to be extracted, and the elements may include: the first element result and/or the second element result, wherein the first element result can be an element extracted from the text based on a supervised algorithm, and the second element result can be an element extracted from the text based on an unsupervised algorithm.
More specifically, extracting elements based on a supervised algorithm may include: extracting content corresponding to predefined entity tags from the text to obtain the first element result, where the first element result may include: the entity tag and the content corresponding to the entity tag in the text.
Wherein the predefined entity tag may be determined according to a domain to which the text belongs. Taking the field of legal documents as an example, the entity tags may include one or more of the following: the loan amount, the loan interest, the time of the proposal, the proposal tool, etc., but not limited thereto.
Further, extracting elements based on an unsupervised algorithm may include: inputting the text to a keyword extraction model to obtain a second element result output by the keyword extraction model, where the second element result may be a keyword. The keyword extraction model may be a model that extracts keywords based on an unsupervised algorithm, and the unsupervised algorithm may be the Term Frequency-Inverse Document Frequency (TF-IDF) algorithm, the TextRank algorithm, or the like, but is not limited thereto.
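As a minimal illustration of the unsupervised branch, the following sketch scores a document's tokens by TF-IDF against a small tokenized corpus and returns the top-scoring tokens as keywords. The function name and the smoothed-IDF formula are illustrative choices, not specified by the patent.

```python
import math
from collections import Counter

def tfidf_keywords(doc_tokens, corpus_tokens, top_k=3):
    """Rank a document's tokens by TF-IDF over a tokenized corpus (a sketch)."""
    tf = Counter(doc_tokens)          # term frequency within the document
    n_docs = len(corpus_tokens)

    def idf(term):
        # Smoothed inverse document frequency over the corpus
        df = sum(1 for doc in corpus_tokens if term in doc)
        return math.log((1 + n_docs) / (1 + df)) + 1.0

    scores = {t: (tf[t] / len(doc_tokens)) * idf(t) for t in tf}
    # The highest-scoring tokens serve as the keyword (second element) result
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]
```

A production keyword extractor would add tokenization, stop-word filtering, and possibly TextRank; this sketch only shows the scoring idea.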
In one non-limiting example, both supervised and unsupervised algorithms may be employed to extract elements. By adopting the scheme, the problem that the predefined entity labels are not comprehensive enough can be solved, and elements in the text can be extracted more comprehensively, so that the element graph can more accurately describe the structural information of the text.
Further, in the solution of the embodiment of the present invention, the nodes in the element graph may be independent from each other. Specifically, independent of each other may mean that there may be no logical containment relationship, semantic reference relationship, synonym relationship, or the like between nodes, but is not limited thereto.
Further, edges between nodes are used to indicate an association relationship between nodes connected by the edges, in other words, the edges may be used to indicate an association relationship between elements corresponding to the nodes, which may be an association relationship of the elements on the structure of the text.
In a specific example, if any two nodes are located in the same sentence, it can be determined that an association relationship exists between the two nodes. More specifically, as long as two nodes appear simultaneously in any sentence of the text, an association relationship between the two nodes can be determined.
In another specific example, for any two nodes, if the number of sentences containing the two nodes simultaneously in the text is greater than or equal to a second preset threshold, it may be determined that an association relationship exists between the two nodes. Wherein the second preset threshold may be preset, for example, may be 2. It should be noted that the association relationship between the nodes may also be determined in other various appropriate manners, which is not limited in this embodiment.
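The co-occurrence rule above can be sketched as follows: two element nodes are connected by an edge when they appear together in at least `min_cooccur` sentences (standing in for the second preset threshold). Substring membership is used as a placeholder for real element spotting; all names are illustrative.

```python
from itertools import combinations

def build_edges(sentences, elements, min_cooccur=2):
    """Connect element pairs that co-occur in >= min_cooccur sentences (a sketch)."""
    counts = {}
    for sent in sentences:
        # Elements present in this sentence (substring check as a stand-in)
        present = sorted({e for e in elements if e in sent})
        for a, b in combinations(present, 2):
            counts[(a, b)] = counts.get((a, b), 0) + 1
    # Keep only pairs whose co-occurrence count reaches the threshold
    return {pair for pair, c in counts.items() if c >= min_cooccur}
```

Setting `min_cooccur=1` recovers the simpler "same sentence at least once" rule from the previous example.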
Further, the weight of the edge is used to indicate the degree of association between two nodes that the edge connects, and more particularly, the weight of the edge may be used to indicate the degree to which two nodes are associated in the structure of the text. It will be appreciated that the greater the weight of an edge, the greater the degree to which two nodes connected by the edge are structurally related. Further, the greater the degree of association between two nodes in the structure, the greater the probability that the two nodes will appear in the same sentence in other texts in the same domain.
In a specific implementation, the first text may be preprocessed before performing step S202.
Specifically, preprocessing the first text may include: the first text is processed using a predefined Regular Expression (Regular Expression) and/or words of a preset property in the first text are replaced with standardized characters.
The following describes the preprocessing process in a non-limiting manner, using legal texts as an example.
On the one hand, legal documents are highly standardized in format: for example, the title of a referee document needs to contain the names of the parties, the cause of the case, the document type, and so on, and the bodies of different referee documents need to contain modules such as party information, case description, legal basis, and the referee result. Therefore, by processing the first text with predefined regular expressions, the processed first text can clearly and intuitively indicate the positions of these modules in the first text, such as the position of the case description or the position of the legal basis.
On the other hand, the preset attributes may include: party name, review level, and document type. To prevent the specific contents of the party name, review level, and document type from interfering with subsequent matching, the contents of these preset attributes can be replaced with standardized character strings in the preprocessing stage, where different preset attributes may correspond to different standardized character strings. For example, "Civil judgment on the pledge-type securities repurchase dispute between a certain stock company and a certain party" may be replaced by "<Norm_Name> and <Norm_Name> pledge-type securities repurchase dispute <Norm_Level> <Norm_Type>".
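A minimal sketch of the attribute-replacement step, assuming the party names are already known: each occurrence is replaced with a standardized placeholder in the `<Norm_Name>` style described above. A real implementation would detect the names with named-entity recognition or patterns rather than receive them as input.

```python
import re

def normalize(text, party_names):
    """Replace known party names with a standardized placeholder (a sketch)."""
    for name in party_names:
        # re.escape guards against regex metacharacters inside the name
        text = re.sub(re.escape(name), "<Norm_Name>", text)
    return text
```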
Further, an element graph of the first text may be constructed based on the preprocessed first text.
Referring to fig. 3, fig. 3 is a flowchart illustrating a specific implementation of step S202. Step S202 shown in fig. 3 may include the steps of:
step S301: constructing an initial element graph of the first text;
step S302: determining a sentence set corresponding to each node in the first text;
step S303: for two nodes connected by an edge, calculating the similarity between sentence sets corresponding to the two nodes to obtain the weight of the edge between the two nodes.
In a specific implementation of step S301, a plurality of elements included in the first text may be extracted. For the specific content of the extracted elements, reference may be made to the above description, and the details are not repeated here.
Further, for every two nodes, whether the two nodes are located in the same sentence or not can be judged, and if yes, an edge is constructed between the two nodes. Thus, an initial element map can be obtained.
In particular implementations, deduplication may also be performed on multiple nodes in the initial element graph. Wherein the deduplication processing may include one or more of:
on one hand, coreference resolution and synonym analysis may be performed on the plurality of nodes to determine whether semantic duplication exists between nodes, and either one of any two semantically duplicated nodes may be removed. Any of various existing suitable methods may be used for the coreference resolution and synonym analysis, which this embodiment does not limit.
On the other hand, a graph algorithm may be adopted to cluster the plurality of nodes in the initial element graph to obtain a plurality of clustered nodes. The graph algorithm may be, but is not limited to, a community detection (Community Detection) algorithm.
In a specific implementation of step S302, a sentence set corresponding to each node may be determined, where sentences in the sentence set corresponding to each node are associated with the node.
Specifically, for each node, a similarity between each sentence in the first text and the node may be calculated, and if the similarity between any one sentence and the node is greater than a fourth preset threshold, it may be determined that the sentence is associated with the node, so that a set of sentences corresponding to the node may be obtained. In other words, the similarity between the sentence in the sentence set corresponding to each node and the node is greater than the fourth preset threshold.
In a specific implementation, considering that a long text may contain a large amount of standard boilerplate, if the similarity between any one sentence and every node is less than a fifth preset threshold, the sentence is discarded, where the fifth preset threshold is smaller than the fourth preset threshold. In other words, if a sentence has low similarity to every node, it may be determined that the sentence is standard boilerplate, and the sentence need not be assigned to the sentence set of any node. With this scheme, invalid interference information in the text can be eliminated, so that the constructed element graph is more accurate.
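The two-threshold assignment described above can be sketched as follows, with `attach_th` and `discard_th` standing in for the fourth and fifth preset thresholds and `sim` being any sentence-node similarity function; all names are illustrative.

```python
def assign_sentences(nodes, sentences, sim, attach_th=0.5, discard_th=0.2):
    """Build per-node sentence sets with attach/discard thresholds (a sketch)."""
    sentence_sets = {node: [] for node in nodes}
    for sent in sentences:
        sims = {node: sim(sent, node) for node in nodes}
        if max(sims.values()) < discard_th:
            continue  # similar to no node at all: treated as boilerplate
        for node, value in sims.items():
            if value > attach_th:
                sentence_sets[node].append(sent)
    return sentence_sets
```

Note that a sentence may land in several sentence sets, since each node is checked independently against the attach threshold.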
It should be further noted that the sentences in the sentence sets may be characterized in the form of text or in the form of sentence embeddings (Sentence Embedding), which is not limited in this embodiment.
In a specific implementation of step S303, for two nodes connected by an edge, a similarity between sentence sets corresponding to the two nodes may be calculated to obtain a weight of the edge between the two nodes.
Specifically, calculating the similarity between the two sentence sets may include: for each sentence set, splicing the plurality of sentences in the set to obtain a spliced sentence; the similarity between the two spliced sentences may then be calculated to obtain the similarity between the two sentence sets. More specifically, during splicing, the sentences may be spliced according to their order of appearance in the text.
It should be noted that the method for calculating the similarity between sentences in the embodiment of the present invention may be any of various existing suitable methods: for example, the cosine similarity between sentences may be calculated, or the similarity may be calculated using the Term Frequency-Inverse Document Frequency (TF-IDF) algorithm, the BM25 algorithm, a Bi-directional Long Short-Term Memory (Bi-LSTM) network, or the like, but is not limited thereto.
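As one of the interchangeable similarity options above, a bag-of-words cosine similarity over the two spliced (and here already tokenized) sentences can be sketched as:

```python
import math
from collections import Counter

def cosine_sim(tokens_a, tokens_b):
    """Bag-of-words cosine similarity between two token lists (a sketch)."""
    a, b = Counter(tokens_a), Counter(tokens_b)
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    # Guard against empty inputs producing a zero norm
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

TF-IDF weighting, BM25, or Bi-LSTM sentence encoders would replace the raw counts here without changing the surrounding flow.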
Therefore, the element diagram of the first text can be obtained, and the element diagram of the first text can embody the elements included in the first text and the structural association relationship and the degree of association between the elements.
With continued reference to fig. 2, in a specific implementation of step S203, semantic information of the first text may be obtained. The semantic information of the first text may refer to global semantic information of the first text, and more specifically, the semantic information may be a semantic feature vector, and the semantic feature vector of the first text may be a feature vector capable of characterizing global semantics of the first text.
In specific implementation, the first text may be input to the semantic extraction model to obtain a semantic feature vector of the first text output by the semantic extraction model; the semantic extraction model is obtained by training a first preset model with sample texts, where the sample texts belong to the same field as the first text and carry pre-labeled tags. The first preset model may be a BERT (Bidirectional Encoder Representations from Transformers) model, but is not limited thereto.
In one specific example, where the domain is the legal text domain, the pre-labeled tags may include one or more of the following: the cited legal provisions, the cause of action, and the document type.
Specifically, the legal provisions cited in a legal text usually involve a large number of elements and are a key indicator of whether texts are similar. Using the cited legal provisions as labels can therefore improve the trained semantic extraction model's ability to express the semantic features of legal texts.
Furthermore, in the legal document field, causes of action are numerous and legal texts under different causes of action differ considerably. Using the cause of action as a label therefore makes the trained semantic extraction model suitable for extracting semantic information from legal texts under various causes of action, giving the model better generality.
Furthermore, the document type can be used as a label to construct a multi-task learning model, so that the model can better understand the content of legal texts and the performance of the model is improved.
Before the first preset model is trained by adopting the sample text, the sample text can be preprocessed to obtain a preprocessed sample text, and then the preprocessed sample text is adopted to train the first preset model. By adopting the scheme, the influence of irrelevant redundant information on the model performance can be avoided, and the semantics extracted by the trained semantic extraction model can be more accurate. For specific content of preprocessing the sample text, reference may be made to the above description related to preprocessing the first text, and details are not described herein again.
In a specific implementation of step S204, semantic information and structural information of the second text may be acquired. Specifically, the semantic information of the second text may be global semantic information of the second text, for example, may be a semantic feature vector of the second text, and the semantic feature vector of the second text may be a feature vector capable of characterizing global semantics of the second text. The semantic information of the second text may also be calculated by the above semantic extraction model.
Further, the structure information of the second text may include: the element graph of the second text and/or the graph of the second text embeds the feature information.
Specifically, the element diagram of the second text may be constructed based on the above method, and is not described herein again. The map-embedded feature information of the second text may be calculated from the element map of the second text. More specifically, the element Graph of the second text may be input to a Graph Convolution Network (GCN), which may be the second Graph convolution Network in the following text, to obtain Graph-embedded feature information output by the Graph convolution Network.
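A single graph-convolution propagation step with a mean-pooling readout can be sketched as follows to show how an element graph might be turned into graph-embedded feature information. This toy, list-based version implements H' = ReLU(Â H W), where Â is the self-loop-augmented, symmetrically normalized adjacency matrix; real GCNs stack several such layers and learn W from data.

```python
import math

def gcn_layer(adj, feats, weight):
    """One GCN propagation step plus mean-pooling readout (a toy sketch).

    adj:    n x n adjacency matrix (lists of lists, 0/1 or edge weights)
    feats:  n x d node feature matrix
    weight: d x d' learned weight matrix (fixed here for illustration)
    """
    n = len(adj)
    # Add self-loops: A_hat = A + I
    a_hat = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a_hat]
    # Symmetric normalization: D^{-1/2} A_hat D^{-1/2}
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)] for i in range(n)]
    # Aggregate neighbor features, apply weights, then ReLU
    agg = [[sum(norm[i][k] * feats[k][j] for k in range(n))
            for j in range(len(feats[0]))] for i in range(n)]
    h = [[max(0.0, sum(agg[i][k] * weight[k][j] for k in range(len(weight))))
          for j in range(len(weight[0]))] for i in range(n)]
    # Mean-pool node states into a single graph embedding vector
    return [sum(col) / n for col in zip(*h)]
```

Edge weights from the element graph slot directly into `adj`, so more strongly associated elements contribute more to each other's states.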
In a specific implementation, obtaining semantic information and structural information of the second text may include: and reading the semantic information and the structural information of the second text. Specifically, the semantic information and the structural information of the second text may be pre-stored in a memory of the terminal according to the embodiment of the present invention, or may be pre-stored in a database coupled to the terminal that performs the embodiment of the present invention.
In other embodiments, obtaining semantic information and structural information of the second text may include: acquiring a second text; and constructing an element graph of the second text, and extracting semantic information of the second text. The graph embedding characteristic information of the second text can be further determined according to the element graph of the second text. For more details on obtaining the semantic information and the structural information of the second text, reference may be made to the above description, and further details are not described here.
In a specific implementation of step S205, a matching result between the first text and the second text may be determined, and the matching result may be: a "match" or a "no match".
Referring to fig. 4, fig. 4 is a flowchart illustrating a specific implementation of step S205 in fig. 2. Step S205 shown in fig. 4 may include the steps of:
step S401: fusing the first element diagram and the second element diagram to obtain a fused element diagram;
step S402: calculating a graph embedding feature vector corresponding to the fused element graph by adopting a first graph convolution network, and recording the graph embedding feature vector as a fused graph feature vector;
step S403: fusing the fusion semantic feature vector and the fusion map feature vector to obtain a fused feature vector;
step S404: and determining the matching result by adopting a first classifier according to the fused feature vector.
In the implementation of step S401, the first element map is an element map of the first text, and the second element map is an element map of the second text. The fused elemental map may include: a plurality of alignment nodes, edges between the alignment nodes, weights of the edges between the alignment nodes, and characteristic information of each alignment node. The following describes a fusion process of the first element map and the second element map in detail.
First, a plurality of aligned nodes may be determined from the nodes in the first element graph and the nodes in the second element graph, where an aligned node is a node that exists in both the first element graph and the second element graph.
Further, for every two aligned nodes, it may be determined whether an edge exists between the two aligned nodes in the first element graph or the second element graph, and if so, it is determined that an edge exists between the two aligned nodes in the fused element graph. In other words, if an edge exists between the two aligned nodes in the first element graph or an edge exists between the two aligned nodes in the second element graph, an edge also exists between the two nodes in the element graph after the merging process. And if no edge exists between the two aligned nodes in the first element graph and the second element graph, no edge exists between the two aligned nodes in the element graph after the fusion processing.
Further, weights of edges between aligned nodes may be determined, wherein the weights of the edges between aligned nodes may be determined according to the weights of the edges between aligned nodes in the first element graph and/or the weights in the second element graph.
Specifically, for any two aligned nodes, if an edge exists only in the first element graph or only in the second element graph, the weight of the edge in the first element graph or the second element graph can be directly used as the weight of the edge between the two aligned nodes in the fused element graph.
Further, if there is an edge in both the first element graph and the second element graph, a weighted average of the weights of the edge in the first element graph and the weights of the edge in the second element graph may be calculated, and the obtained weighted average may be used as the weight of the edge between the two aligned nodes in the merged element graph.
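The edge-weight rules for the fused element graph can be sketched as follows, with edge weights stored as dictionaries keyed by node pairs. The mixing coefficient `alpha` is an illustrative stand-in for the weighted average; the patent does not specify its value.

```python
def fuse_edge_weights(w1, w2, alpha=0.5):
    """Fuse edge weights of two element graphs over aligned nodes (a sketch).

    An edge present in only one graph keeps its weight; an edge present in
    both graphs gets a weighted average of the two weights.
    """
    fused = {}
    for edge in set(w1) | set(w2):
        if edge in w1 and edge in w2:
            fused[edge] = alpha * w1[edge] + (1 - alpha) * w2[edge]
        else:
            # Edge exists in exactly one graph: carry its weight over directly
            fused[edge] = w1.get(edge, w2.get(edge))
    return fused
```

The keys are assumed to be pairs of aligned-node identifiers, normalized (e.g. sorted) so that the same edge hashes identically in both graphs.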
Further, characteristic information of each alignment node may be determined. Specifically, for each alignment node, feature information for the alignment node is determined based on a first set of sentences and a second set of sentences of the alignment node. The first sentence set is a sentence set corresponding to the alignment node in the first element graph, and the second sentence set is a sentence set corresponding to the alignment node in the second element graph.
More specifically, the feature information of each alignment node may include first feature information and second feature information of the node, the first feature information may be obtained by performing a fusion process according to semantic information of the first sentence set and semantic information of the second sentence set, and the second feature information may be used to indicate a similarity between the first sentence set and the second sentence set.
In a specific implementation, the sentence vectors of the sentences in the first sentence set and the sentence vectors of the sentences in the second sentence set may be weighted-averaged to obtain the first feature information.
Further, a similarity between the first sentence set and the second sentence set may be calculated based on the semantic information of the first sentence set and the semantic information of the second sentence set, and may be used as the second feature information of the alignment node. For the calculation method of the similarity between the sentence subsets, reference may be made to the above description, and details are not repeated here.
Further, the first characteristic information and the second characteristic information of the alignment node may be spliced to obtain the characteristic information of the alignment node. In the spliced feature information, the first feature information may be before and the second feature information may be after.
In other embodiments, the first feature information may also be directly used as the feature information of the aligned node, which is not limited in this embodiment.
From the above, the fused element graph can be obtained, and it can be understood that the fused element graph may include both the structure information of the first text and the structure information of the second text.
Before performing step S402, a first preset graph convolution network and a first preset classifier may be trained by using a first training sample to obtain the first graph convolution network in step S402 and the first classifier in step S404.
Specifically, the first training sample may include: the semantic information and element graph of a first sample text, the semantic information and element graph of a second sample text, and a pre-labeled first label, where the first label is used to indicate the true matching result of the first sample text and the second sample text. The true matching result may be a match or a mismatch. It should be noted that the first sample text, the second sample text, and the first text belong to the same field.
More specifically, the semantic information of the first sample text may be a semantic feature vector of the first sample text, which can characterize the global semantics of the first sample text; likewise, the semantic information of the second sample text may be a semantic vector of the second sample text, capable of characterizing the global semantics of the second sample text.
In a specific implementation, a fusion process may be performed on the element graph of the first sample text and the element graph of the second sample text to obtain a fused sample element graph. For a specific process of performing the fusion processing on the element graph of the first sample text and the element graph of the second sample text, reference may be made to the related description of step S401, and details are not repeated here.
Further, a first preset graph convolution network can be adopted to calculate graph embedding feature vectors corresponding to the fused sample element graphs, and the graph embedding feature vectors are recorded as fused sample graph feature vectors. The first predetermined graph convolution network may be any suitable graph convolution network, and the specific structure of the first predetermined graph convolution network is not limited in this embodiment.
Further, the semantic feature vector of the first sample text and the semantic feature vector of the second sample text may be subjected to fusion processing to obtain a fusion sample semantic feature vector. In specific implementation, the semantic feature vector of the first sample text and the semantic feature vector of the second sample text may be spliced to obtain a fusion sample semantic feature vector.
Further, the fusion sample semantic feature vector and the fused sample graph feature vector may be subjected to fusion processing to obtain a fused sample feature vector. In a specific implementation, the fusion sample semantic feature vector and the fused sample graph feature vector may be spliced to obtain the fused sample feature vector. More specifically, during splicing, the fusion sample semantic feature vector may be placed before the fused sample graph feature vector, but this is not limited thereto.
Further, a first preset classifier may be employed to determine a first predicted matching result according to the fused sample feature vector. The first preset Classifier may be any suitable Classifier (Classifier) in the prior art, and the present embodiment does not limit this.
Further, the first predicted loss may be calculated according to the first predicted matching result and the first label, and more specifically, the first predicted loss may be determined according to the first predicted matching result, the true matching result, and a preset loss function, wherein the loss function may be a Softmax function, but is not limited thereto.
Further, the first preset graph convolution network and the first preset classifier may be updated according to the first predicted loss until a preset training stop condition is satisfied. Wherein the preset training stop condition may include: the number of updates reaches a sixth preset threshold, and/or the first predicted loss is smaller than a seventh preset threshold, but not limited thereto.
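The training loop with the preset stop condition can be sketched as follows, where `max_updates` and `loss_tol` stand in for the sixth and seventh preset thresholds, and `step` performs one update of the graph convolution network and classifier and returns the current prediction loss; all names are illustrative.

```python
def train(step, max_updates=100, loss_tol=1e-3):
    """Run update steps until either preset stop condition holds (a sketch)."""
    loss = float("inf")
    for update in range(1, max_updates + 1):
        loss = step()  # one parameter update; returns the first predicted loss
        if loss < loss_tol:
            break  # predicted loss below tolerance: stop early
    return update, loss
```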
Thus, the first graph convolution network and the first classifier may be obtained by training. Further, step S402 may be executed: the fused element graph obtained in step S401 may be input to the first graph convolution network to obtain the fusion graph feature vector output by the first graph convolution network. That is, the fusion graph feature vector is calculated by the first graph convolution network according to the fused element graph.
In a specific implementation of step S403, a fusion process may be performed on the semantic feature vector of the first text and the semantic feature vector of the second text to obtain a fusion semantic feature vector. In specific implementation, the semantic feature vector of the first text and the semantic feature vector of the second text may be spliced to obtain a fused semantic feature vector. In the splicing process, the front-back sequence between the semantic feature vector of the first text and the semantic feature vector of the second text is not limited in this embodiment.
Further, the fusion semantic feature vector and the fusion graph feature vector can be subjected to fusion processing to obtain a fused feature vector. In specific implementation, the fusion semantic feature vector and the fusion graph feature vector can be spliced to obtain a fused feature vector.
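The splicing in steps S403 and S404 can be sketched as follows. The logistic scorer is only a stand-in for the first classifier (which, per the description above, may be any suitable classifier); the 0.5 threshold and all names are illustrative assumptions.

```python
import math

def fused_input(sem_first, sem_second, fusion_graph_vec):
    """Splice the two semantic vectors, then append the graph vector (a sketch)."""
    return sem_first + sem_second + fusion_graph_vec

def toy_classifier(vec, weights, bias=0.0):
    """Stand-in classifier: logistic score over the fused feature vector."""
    z = sum(w * x for w, x in zip(weights, vec)) + bias
    return "match" if 1.0 / (1.0 + math.exp(-z)) > 0.5 else "no match"
```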
In a specific implementation of step S404, the fused feature vector may be input to the first classifier, so that the first classifier determines a matching result of the first text and the second text according to the fused feature vector calculation.
From the above, a matching result of the first text and the second text can be obtained. With this scheme, the fused feature vector input to the first classifier is obtained by fusing the fusion graph feature vector and the fusion semantic feature vector, where the fusion graph feature vector is calculated by the first graph convolution network from the fused element graph; the matching result therefore takes into account both the semantic information and the structural information of the two texts.
Referring to fig. 5, fig. 5 is a schematic flow chart of another specific implementation of step S205 in fig. 2. Step S205 shown in fig. 5 may include the steps of:
step S501: calculating a graph embedding characteristic vector corresponding to the element graph of the first text by adopting a second graph convolution network, and recording the graph embedding characteristic vector as a first graph embedding characteristic vector;
step S502: determining a total feature vector of the first text according to the first graph embedding feature vector and the semantic feature vector of the first text;
step S503: determining a total feature vector of the second text according to a second graph embedding feature vector and a semantic feature vector of the second text;
step S504: and determining the matching result by adopting a second classifier according to the total feature vector of the first text and the total feature vector of the second text.
Before step S501 is executed, a second training sample may be used to train a second preset graph convolution network and a second preset classifier, so as to obtain the second graph convolution network in step S501 and the second classifier in step S504.
Specifically, the second training sample may include: the semantic information and element graph of a third sample text, the semantic information and element graph of a fourth sample text, and a pre-labeled second label, where the second label is used to indicate the true matching result of the third sample text and the fourth sample text.
For more details about the second training sample, reference may be made to the above description of the first training sample, which is not repeated here. For specific contents of the second preset graph convolution network and the second preset classifier, reference may be made to the above description of the first preset graph convolution network and the first preset classifier, which is not repeated here.
In the training process, the number of the second preset graph convolution networks is 2, specifically, the element graph of the third sample text may be input to one of the two second preset graph convolution networks to obtain an output graph embedding feature vector, which is recorded as a third sample graph feature vector, and the element graph of the fourth sample text may be input to the other of the two second preset graph convolution networks to obtain an output graph embedding feature vector, which is recorded as a fourth sample graph feature vector.
Further, the third sample graph feature vector and the semantic feature vector of the third sample text may be subjected to fusion processing to obtain a third sample feature vector; and the fourth sample graph feature vector and the semantic feature vector of the fourth sample text may be fused to obtain a fourth sample feature vector.
Further, a second preset classifier can be adopted to calculate a second prediction matching result according to the third sample feature vector and the fourth sample feature vector.
Further, a second predicted loss may be calculated based on the second label and the second predicted match.
Further, the second pre-set graph convolutional network and the second pre-set classifier may be updated according to the second predicted loss until a pre-set stop condition is satisfied.
From the above, a second graph convolution network and a second classifier may be obtained.
In a specific implementation of step S501, the graph embedding feature vector corresponding to the element graph of the first text may be calculated using the second graph convolution network, and recorded as the first graph embedding feature vector.
In a specific implementation of step S502, the first graph embedding feature vector and the semantic feature vector of the first text are fused to obtain the total feature vector of the first text.
In a specific implementation of step S503, the second graph embedding feature vector may be obtained first; it may likewise be calculated by the second graph convolution network.
Further, the total feature vector of the second text may be determined from the second graph embedding feature vector and the semantic feature vector of the second text.
In a specific implementation of step S504, the total feature vector of the first text and the total feature vector of the second text may be input to the second classifier to obtain the matching result output by the second classifier.
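The inference flow of steps S501 to S504 can be sketched similarly (again a hypothetical illustration, not the claimed implementation). Here the second text's graph embedding feature vector is assumed to be precomputed and stored as its structure information, so only the first text's element graph needs to pass through the graph convolution network at query time; cosine similarity with a threshold stands in for the second classifier:

```python
import numpy as np

def graph_conv(node_feats, adj):
    """Simplified second graph convolution network: weighted neighbor
    aggregation followed by mean pooling."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-8
    return ((adj @ node_feats) / deg).mean(axis=0)

def total_vector(graph_vec, semantic_vec):
    """Fuse a graph embedding feature vector with a semantic feature
    vector into the text's total feature vector (by concatenation)."""
    return np.concatenate([graph_vec, semantic_vec])

def classify(u, v, threshold=0.5):
    """Stand-in for the second classifier: cosine similarity of the two
    total feature vectors, compared against a match threshold."""
    sim = float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))
    return sim, sim >= threshold

rng = np.random.default_rng(1)
# S501: first graph embedding feature vector from the first text's element graph
feats1, adj1 = rng.normal(size=(3, 8)), np.abs(rng.normal(size=(3, 3)))
g1 = graph_conv(feats1, adj1)
# S503: second graph embedding feature vector, assumed precomputed offline
g2 = rng.normal(size=g1.shape[0])
sem1, sem2 = rng.normal(size=8), rng.normal(size=8)

t1 = total_vector(g1, sem1)  # S502: total feature vector of the first text
t2 = total_vector(g2, sem2)  # total feature vector of the second text
similarity, matched = classify(t1, t2)  # S504: matching result
```

Precomputing and storing the second text's graph embedding feature vector is what makes the structure information reusable across many queries, which is the practical advantage of this branch of the method.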
With continued reference to fig. 1, in other embodiments, the matching result of the first text and the second text may also be determined in other suitable manners from the semantic information and element graph of the first text and the semantic information and structure information of the second text.
For example, a first matching result may be calculated from the semantic information of the first text and the semantic information of the second text; a second matching result may then be calculated from the element graph of the first text and the structure information of the second text; and the final matching result may be determined from the first matching result and the second matching result. Calculating the second matching result from the element graph of the first text and the structure information of the second text may include: calculating the first graph embedding feature vector from the element graph of the first text, and then calculating the second matching result from the first graph embedding feature vector and the second graph embedding feature vector.
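The two-score variant described in this paragraph might, for instance, combine the semantic-level score and the graph-level score by a weighted average; the weight `alpha` and the use of cosine similarity for both scores are purely assumptions for illustration, not details specified by the patent:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity with a small epsilon to avoid division by zero."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def final_match(sem1, sem2, g1, g2, alpha=0.5, threshold=0.5):
    """Combine a semantic-level score (first matching result) with a
    graph-embedding score (second matching result)."""
    first = cosine(sem1, sem2)   # from the two texts' semantic information
    second = cosine(g1, g2)      # from the two graph embedding feature vectors
    score = alpha * first + (1 - alpha) * second
    return score, score >= threshold

# toy vectors: semantically identical and structurally identical texts
score, matched = final_match(np.array([1.0, 0.0]), np.array([1.0, 0.0]),
                             np.array([0.0, 1.0]), np.array([0.0, 1.0]))
```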
Referring to fig. 6, fig. 6 is a schematic structural diagram of a text matching apparatus in an embodiment of the present invention, and the apparatus shown in fig. 6 may include:
a first obtaining module 61, configured to obtain a first text;
a construction module 62, configured to construct an element graph of the first text;
a semantic extraction module 63, configured to extract semantic information of the first text;
a second obtaining module 64, configured to obtain semantic information and structure information of a second text, where the structure information of the second text includes: an element graph and/or graph embedding feature information of the second text, the graph embedding feature information being calculated from the element graph;
a matching module 65, configured to determine a matching result between the first text and the second text according to the semantic information and element graph of the first text and the semantic information and structure information of the second text;
the element graph comprises a plurality of nodes, edges between the nodes and weights of the edges, wherein the nodes are elements contained in the first text, the edges between the nodes are used for indicating an association relationship between two nodes connected by the edges, and the weights of the edges are used for indicating the association degree between the two nodes connected by the edges.
For more contents such as the working principle, the working method, and the beneficial effects of the text matching apparatus in the embodiment of the present invention, reference may be made to the above description related to the text matching method, and details are not repeated here.
An embodiment of the present invention further provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the text matching method are performed. The storage medium may include ROM, RAM, magnetic or optical disks, etc. The storage medium may further include a non-volatile memory (non-volatile) or a non-transitory memory (non-transient), and the like.
The embodiment of the present invention further provides a terminal, which includes a memory and a processor, where the memory stores a computer program that can be run on the processor, and the processor executes the steps of the text matching method when running the computer program. The terminal includes, but is not limited to, a mobile phone, a computer, a tablet computer and other terminal devices.
It should be understood that, in the embodiment of the present application, the processor may be a Central Processing Unit (CPU), and the processor may also be other general-purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will also be appreciated that the memory in the embodiments of the present application can be volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. The procedures or functions according to the embodiments of the present application are wholly or partially generated when the computer instructions or the computer program are loaded or executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer program may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer program may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly.
In the several embodiments provided in the present application, it should be understood that the disclosed method, apparatus and system may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative; for example, the division of the unit is only a logic function division, and there may be another division manner in actual implementation; for example, various elements or components may be combined or may be integrated in another system or some features may be omitted, or not implemented. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit. For example, for each device or product applied to or integrated into a chip, each module/unit included in the device or product may be implemented by hardware such as a circuit, or at least a part of the module/unit may be implemented by a software program running on a processor integrated within the chip, and the rest (if any) part of the module/unit may be implemented by hardware such as a circuit; for each device or product applied to or integrated with the chip module, each module/unit included in the device or product may be implemented by using hardware such as a circuit, and different modules/units may be located in the same component (e.g., a chip, a circuit module, etc.) or different components of the chip module, or at least some of the modules/units may be implemented by using a software program running on a processor integrated within the chip module, and the rest (if any) of the modules/units may be implemented by using hardware such as a circuit; for each device and product applied to or integrated in the terminal, each module/unit included in the device and product may be implemented by using hardware such as a circuit, and different modules/units may be located in the same component (e.g., a chip, a circuit module, etc.) or different components in the terminal, or at least part of the modules/units may be implemented by using a software program running on a processor integrated in the terminal, and the rest (if any) part of the modules/units may be implemented by using hardware such as a circuit.
It should be understood that the term "and/or" herein is merely one type of association relationship that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in this document indicates that the former and latter related objects are in an "or" relationship.
The term "plurality" appearing in the embodiments of the present application means two or more. The descriptions of "first", "second", etc. in the embodiments of the present application are only for illustrating and distinguishing the described objects; they do not represent any particular limitation on the number of devices and cannot constitute any limitation on the embodiments of the present application.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (14)
1. A method of text matching, the method comprising:
acquiring a first text;
constructing an element graph of the first text;
extracting semantic information of the first text;
obtaining semantic information and structure information of a second text, wherein the structure information of the second text comprises: an element graph and/or graph embedding feature information of the second text, wherein the graph embedding feature information is calculated from the element graph;
determining a matching result of the first text and the second text according to the semantic information and the element graph of the first text and the semantic information and the structural information of the second text;
the element graph comprises a plurality of nodes, edges between the nodes and weights of the edges, wherein the nodes are elements contained in a text, the edges between the nodes are used for indicating an association relationship between two nodes connected by the edges, and the weights of the edges are used for indicating the association degree between the two nodes connected by the edges.
2. The text matching method of claim 1, wherein constructing an element graph of the first text comprises:
constructing an initial element graph of the first text, the initial element graph comprising: a plurality of nodes and edges between the nodes, wherein an edge between the nodes is used for indicating whether the two nodes connected by the edge are located in the same sentence;
determining a corresponding sentence set of each node in the first text, wherein sentences in the sentence sets are all associated with elements corresponding to the node;
for two nodes connected by an edge, calculating the similarity between sentence sets corresponding to the two nodes to obtain the weight of the edge between the two nodes.
3. The text matching method of claim 2, wherein determining a sentence set corresponding to each node from the first text comprises:
calculating the similarity between each sentence and each node;
and if the similarity between any sentence and each node is smaller than a fifth preset threshold value, rejecting the sentence.
4. The text matching method according to claim 2, wherein before determining the sentence set corresponding to each node from the first text, the method further comprises:
and carrying out deduplication processing on a plurality of nodes in the initial element graph.
5. The text matching method of claim 1, wherein extracting semantic information of the first text comprises:
inputting the first text into a semantic extraction model to obtain a semantic feature vector output by the semantic extraction model;
the semantic extraction model is obtained by training a first preset model by adopting a sample text, the sample text and the first text belong to the same field, and the sample text is provided with a label labeled in advance.
6. The text matching method of claim 5, wherein the domain is the legal document domain, and the label comprises one or more of: cited legal provisions, a cause of action, and a document type.
7. The text matching method of claim 1, wherein determining the matching result of the first text and the second text according to the semantic information and element graph of the first text and the semantic information and structure information of the second text comprises:
performing fusion processing on a first element graph and a second element graph to obtain a fused element graph, wherein the first element graph is an element graph of the first text, and the second element graph is an element graph of the second text;
calculating a graph embedding feature vector corresponding to the fused element graph by adopting a first graph convolution network, and recording the graph embedding feature vector as a fused graph feature vector;
fusing the fusion semantic feature vector and the fusion map feature vector to obtain a fused feature vector, wherein the fusion semantic feature vector is obtained by fusing the semantic feature vector of the first text and the semantic feature vector of the second text;
and determining the matching result by adopting a first classifier according to the fused feature vector.
8. The text matching method of claim 7, wherein the first graph convolution network and the first classifier are obtained by training a first preset graph convolution network and a first preset classifier using a first training sample, and the first training sample comprises: semantic information and an element graph of a first sample text, semantic information and an element graph of a second sample text, and a pre-labeled first label, the first label being used for indicating a real matching result of the first sample text and the second sample text; and wherein, before determining the matching result of the first text and the second text according to the semantic information and element graph of the first text and the semantic information and structure information of the second text, the method further comprises:
performing fusion processing on the element graph of the first sample text and the element graph of the second sample text to obtain a fused sample element graph;
calculating a graph embedding feature vector corresponding to the fused sample element graph by adopting the first preset graph convolution network, and recording the graph embedding feature vector as a fused sample graph feature vector;
fusing a fused sample semantic feature vector and the fused sample image feature vector to obtain a fused sample feature vector, wherein the fused sample semantic feature vector is obtained by fusing the semantic feature vector of the first sample text and the semantic feature vector of the second sample text;
determining a first prediction matching result according to the fused sample feature vector by adopting the first preset classifier;
and calculating a first prediction loss according to the first prediction matching result and the first label, and updating the first preset graph convolution network and the first preset classifier according to the first prediction loss until a preset training stop condition is met.
9. The text matching method according to claim 7, wherein the fused element graph comprises: a plurality of aligned nodes, edges between the aligned nodes, and weights of the edges between the aligned nodes, wherein the aligned nodes are nodes existing in both the first element graph and the second element graph and each aligned node has feature information, and wherein performing fusion processing on the element graph of the first text and the element graph of the second text comprises:
determining the plurality of aligned nodes;
for every two aligned nodes, judging whether an edge exists between the two aligned nodes in the first element graph or the second element graph, and if so, constructing an edge between the two aligned nodes, wherein the weight of the edge between the aligned nodes is determined according to the weight of the edge between the aligned nodes in the first element graph and/or the weight of the edge between the aligned nodes in the second element graph;
for each alignment node, determining characteristic information of the alignment node according to the first sentence set and the second sentence set of the alignment node;
wherein the first sentence set is the sentence set corresponding to the alignment node in the first element graph, and the second sentence set is the sentence set corresponding to the alignment node in the second element graph.
10. The text matching method of claim 9, wherein determining feature information for each alignment node based on the first set of sentences and the second set of sentences for the alignment node comprises:
for each alignment node, performing fusion processing on the semantic information of the first sentence set and the semantic information of the second sentence set of the alignment node to obtain first characteristic information of the alignment node;
for each alignment node, calculating the similarity between a first sentence set and a second sentence set according to the semantic information of the first sentence set and the semantic information of the second sentence set of the alignment node, and taking the similarity as second characteristic information of the alignment node;
and determining the characteristic information of each alignment node according to the first characteristic information and the second characteristic information of the alignment node.
11. The text matching method according to claim 1, wherein the semantic information is a semantic feature vector, the structure information is a graph embedding feature vector, and determining the matching result of the first text and the second text according to the semantic information and element graph of the first text and the semantic information and structure information of the second text comprises:
calculating a graph embedding feature vector corresponding to the element graph of the first text by using a second graph convolution network, and recording it as a first graph embedding feature vector;
determining a total feature vector of the first text according to the first graph embedding feature vector and the semantic feature vector of the first text;
determining a total feature vector of the second text according to a second graph embedding feature vector and a semantic feature vector of the second text, wherein the second graph embedding feature vector is the graph embedding feature vector of the second text;
and determining the matching result according to the total feature vector of the first text and the total feature vector of the second text.
12. A text matching apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring a first text;
the construction module is used for constructing an element graph of the first text;
the semantic extraction module is used for extracting semantic information of the first text;
a second obtaining module, configured to obtain semantic information and structural information of a second text, where the structural information of the second text includes: element graphs and/or graph embedding feature information of the second text, wherein the graph embedding feature information is obtained by calculation according to the element graphs;
the matching module is used for determining a matching result of the first text and the second text according to the semantic information and element graph of the first text and the semantic information and structure information of the second text;
the element graph comprises a plurality of nodes, edges between the nodes and weights of the edges, wherein the nodes are elements contained in the first text, the edges between the nodes are used for indicating an association relationship between two nodes connected by the edges, and the weights of the edges are used for indicating the association degree between the two nodes connected by the edges.
13. A storage medium having stored thereon a computer program, characterized in that the computer program, when being executed by a processor, performs the steps of the text matching method according to any one of claims 1 to 11.
14. A terminal comprising a memory and a processor, the memory having stored thereon a computer program operable on the processor, characterized in that the processor, when executing the computer program, performs the steps of the text matching method of any of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210170758.4A CN114661861B (en) | 2022-02-23 | 2022-02-23 | Text matching method and device, storage medium and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210170758.4A CN114661861B (en) | 2022-02-23 | 2022-02-23 | Text matching method and device, storage medium and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114661861A true CN114661861A (en) | 2022-06-24 |
CN114661861B CN114661861B (en) | 2024-06-21 |
Family
ID=82027439
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210170758.4A Active CN114661861B (en) | 2022-02-23 | 2022-02-23 | Text matching method and device, storage medium and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114661861B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115545001A (en) * | 2022-11-29 | 2022-12-30 | 支付宝(杭州)信息技术有限公司 | Text matching method and device |
CN115935195A (en) * | 2022-11-08 | 2023-04-07 | 华院计算技术(上海)股份有限公司 | Text matching method and device, computer readable storage medium and terminal |
CN116226332A (en) * | 2023-02-24 | 2023-06-06 | 华院计算技术(上海)股份有限公司 | Metaphor generation method and system based on concept metaphor theory |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111274822A (en) * | 2018-11-20 | 2020-06-12 | 华为技术有限公司 | Semantic matching method, device, equipment and storage medium |
-
2022
- 2022-02-23 CN CN202210170758.4A patent/CN114661861B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111274822A (en) * | 2018-11-20 | 2020-06-12 | 华为技术有限公司 | Semantic matching method, device, equipment and storage medium |
Non-Patent Citations (3)
Title |
---|
BANG LIU et al.: "Matching Article Pairs with Graphical Decomposition and Convolutions", Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1 - 5 *
LI Gang et al.: "Research on a Technology Supply-Demand Text Matching Model Based on Multi-Layer Semantic Similarity", Data Analysis and Knowledge Discovery, vol. 5, no. 12, pages 25 - 36 *
LI Xiong; DING Zhiming; SU Xing; GUO Limin: "Research on Text Semantic Label Extraction Based on Term Clustering", Computer Science, no. 2, 15 November 2018 (2018-11-15), pages 427 - 431 *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115935195A (en) * | 2022-11-08 | 2023-04-07 | 华院计算技术(上海)股份有限公司 | Text matching method and device, computer readable storage medium and terminal |
CN115935195B (en) * | 2022-11-08 | 2023-08-08 | 华院计算技术(上海)股份有限公司 | Text matching method and device, computer readable storage medium and terminal |
WO2024098636A1 (en) * | 2022-11-08 | 2024-05-16 | 华院计算技术(上海)股份有限公司 | Text matching method and apparatus, computer-readable storage medium, and terminal |
CN115545001A (en) * | 2022-11-29 | 2022-12-30 | 支付宝(杭州)信息技术有限公司 | Text matching method and device |
CN115545001B (en) * | 2022-11-29 | 2023-04-07 | 支付宝(杭州)信息技术有限公司 | Text matching method and device |
CN116226332A (en) * | 2023-02-24 | 2023-06-06 | 华院计算技术(上海)股份有限公司 | Metaphor generation method and system based on concept metaphor theory |
CN116226332B (en) * | 2023-02-24 | 2024-02-06 | 华院计算技术(上海)股份有限公司 | Metaphor generation method and system based on concept metaphor theory |
Also Published As
Publication number | Publication date |
---|---|
CN114661861B (en) | 2024-06-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Abdullah et al. | Fake news classification bimodal using convolutional neural network and long short-term memory | |
CN114661861B (en) | Text matching method and device, storage medium and terminal | |
CN112819023B (en) | Sample set acquisition method, device, computer equipment and storage medium | |
CN110276023B (en) | POI transition event discovery method, device, computing equipment and medium | |
CN111858940B (en) | Multi-head attention-based legal case similarity calculation method and system | |
CN113392209B (en) | Text clustering method based on artificial intelligence, related equipment and storage medium | |
CN113722490B (en) | Visual rich document information extraction method based on key value matching relation | |
WO2023040493A1 (en) | Event detection | |
WO2021190662A1 (en) | Medical text sorting method and apparatus, electronic device, and storage medium | |
CN114491018A (en) | Construction method of sensitive information detection model, and sensitive information detection method and device | |
CN113806660B (en) | Data evaluation method, training device, electronic equipment and storage medium | |
CN115983271A (en) | Named entity recognition method and named entity recognition model training method | |
CN117891939A (en) | Text classification method combining particle swarm algorithm with CNN convolutional neural network | |
CN117093604B (en) | Search information generation method, apparatus, electronic device, and computer-readable medium | |
CN112270189B (en) | Question type analysis node generation method, system and storage medium | |
US20230090601A1 (en) | System and method for polarity analysis | |
CN116151258A (en) | Text disambiguation method, electronic device and storage medium | |
CN115129885A (en) | Entity chain pointing method, device, equipment and storage medium | |
CN115098619A (en) | Information duplication eliminating method and device, electronic equipment and computer readable storage medium | |
CN114328894A (en) | Document processing method, document processing device, electronic equipment and medium | |
CN113901817A (en) | Document classification method and device, computer equipment and storage medium | |
CN113553410A (en) | Long document processing method, processing device, electronic equipment and storage medium | |
CN112100336A (en) | Method and device for identifying preservation time of file and storage medium | |
CN116992874B (en) | Text quotation auditing and tracing method, system, device and storage medium | |
CN114662480B (en) | Synonymous label judging method, synonymous label judging device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||