CN109086303A - Intelligent dialogue method, apparatus and terminal based on machine reading comprehension - Google Patents
- Publication number: CN109086303A
- Application number: CN201810642836.XA
- Authority
- CN
- China
- Prior art keywords
- text
- vector
- word
- answer
- question
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
Abstract
The present invention provides an intelligent dialogue method, apparatus and terminal based on machine reading comprehension. The method includes: obtaining a question posed by a user and a text corresponding to the question; performing word segmentation and vectorization on the question and the text to obtain question vectors and text vectors; inputting the question vectors and text vectors into an attention model to obtain a first vector and a second vector, where the first vector indicates how strongly the question draws attention to each word in the text and the second vector indicates how strongly the text influences the generation of the question; and determining an answer start point and an answer end point in the text according to the first and second vectors, the passage between the answer start point and the answer end point being taken as the answer to the question. The method requires no preset "question-answer" pairs, can flexibly answer a wide variety of user questions, overcomes the prior art's need to continually maintain a question library, and reduces the cost of updating data.
Description
Technical field
The present invention relates to the field of artificial intelligence, and in particular to an intelligent dialogue method, apparatus and terminal based on machine reading comprehension.
Background art
Existing intelligent dialogue systems obtain answers mainly by retrieving against the question a user inputs. Natural-language dialogue techniques based on information retrieval require a large dialogue corpus to be indexed as "question-answer" pairs; during an online conversation, the system searches for a question similar to the user's input and returns the corresponding answer. However, when the user's input matches the corpus poorly, the system cannot guarantee a semantically relevant reply. Such systems scale poorly and cannot produce replies that never appear in the corpus.
Existing intelligent dialogue systems are therefore limited by the amount of data pre-stored in the corpus: many questions go unanswered, and when a user poses a new question the system either cannot answer at all or answers off-topic. Moreover, because common questions and their answers change frequently, keeping the corpus up to date requires ongoing manual effort.
Summary of the invention
The present invention aims to solve at least one of the technical defects described above.
In a first aspect, the present invention provides an intelligent dialogue method based on machine reading comprehension, comprising the following steps:
obtaining a question posed by a user and a text corresponding to the question;
performing word segmentation and vectorization on the question and the text to obtain a question vector for each word in the question and a text vector for each word in the text;
inputting the question vectors and text vectors into an attention model to obtain a first vector and a second vector, where the first vector indicates how strongly the question draws attention to each word in the text and the second vector indicates how strongly the text influences the generation of the question;
determining an answer start point and an answer end point in the text according to the first and second vectors, and taking the passage between the answer start point and the answer end point as the answer to the question.
Further, obtaining the question posed by the user and the text corresponding to the question comprises: obtaining the question posed by the user, and searching a network and/or a database for a text corresponding to the question.
Further, performing word segmentation and vectorization on the question and the text to obtain a question vector for each word in the question and a text vector for each word in the text comprises:
performing word segmentation on the question and the text separately;
inputting the segmentation results of the question and the text into a vectorization model to obtain a first question vector for each word in the question and a first text vector for each word in the text;
updating the first question vectors and first text vectors with a bidirectional recurrent neural network to obtain the question vectors corresponding to the first question vectors and the text vectors corresponding to the first text vectors.
Further, before inputting the segmentation results of the question and the text into the vectorization model, the method further comprises: removing the punctuation marks from the segmentation results of the text and the question.
Further, inputting the question vectors and text vectors into the attention model to obtain the first vector and the second vector comprises:
inputting the question vectors and text vectors into a first neural network based on an attention mechanism to obtain an attention distribution, where each attention value in the distribution indicates how strongly a word in the question draws attention to a word in the text;
for each word in the text, taking its attention values as weights and computing a weighted average of the question vectors to obtain that word's first vector;
taking the maximum attention value for each word in the text, normalizing these maxima, and using the normalized maxima as weights to compute a weighted average of the text vectors, obtaining the second vector.
Further, determining the answer start point and answer end point in the text according to the first and second vectors, and taking the passage between them as the answer to the question, comprises:
inputting the first and second vectors into a second neural network to obtain, for each word in the text, a first probability value that it is the answer start point;
inputting the first vectors, the second vector and the first probability values into a third neural network to obtain, for each word in the text, a second probability value that it is the answer end point;
computing, from the first and second probability values, a third probability value that the passage between each pair of words is the answer, and taking the passage with the largest third probability value as the answer to the question.
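This last step can be sketched as follows. The patent does not specify how the third probability is derived from the first and second probability values, so the product rule below (score of span i..j = p_start[i] * p_end[j], with j >= i) is an assumption, and all names and values are illustrative:

```python
import numpy as np

def best_span(p_start, p_end, max_len=None):
    """Return (i, j, score) maximizing p_start[i] * p_end[j] with j >= i.

    Assumption: the 'third probability' is the product of the first and
    second probabilities; the patent only says it is computed from them.
    """
    n = len(p_start)
    best = (0, 0, -1.0)
    for i in range(n):
        j_hi = n if max_len is None else min(n, i + max_len)
        for j in range(i, j_hi):
            score = float(p_start[i] * p_end[j])
            if score > best[2]:
                best = (i, j, score)
    return best

# Toy start/end probabilities over an 8-word text (illustrative values)
tokens = ["oxygen", "bag", "is", "a", "sack", "for", "holding", "oxygen"]
p_start = np.array([0.05, 0.1, 0.05, 0.5, 0.1, 0.05, 0.05, 0.1])
p_end   = np.array([0.05, 0.05, 0.05, 0.05, 0.6, 0.05, 0.05, 0.1])
i, j, score = best_span(p_start, p_end)
answer = tokens[i:j + 1]   # the passage with the largest third probability
```

The j >= i constraint guarantees the end point never precedes the start point, which a naive independent argmax over the two distributions would not.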
Further, the second neural network and the third neural network are bidirectional LSTM recurrent neural networks.
In a second aspect, the present invention also provides an intelligent dialogue apparatus based on machine reading comprehension, comprising:
a text acquisition unit for obtaining a question posed by a user and a text corresponding to the question;
a preprocessing unit for performing word segmentation and vectorization on the question and the text to obtain a question vector for each word in the question and a text vector for each word in the text;
an attention computation unit for inputting the question vectors and text vectors into an attention model to obtain a first vector and a second vector, where the first vector indicates how strongly the question draws attention to each word in the text and the second vector indicates how strongly the text influences the generation of the question;
an answer locating unit for determining an answer start point and an answer end point in the text according to the first and second vectors, and taking the passage between them as the answer to the question.
In a third aspect, the present invention also provides an intelligent dialogue terminal based on machine reading comprehension, comprising:
one or more processors;
a memory; and
one or more application programs stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the intelligent dialogue method based on machine reading comprehension of any embodiment of the first aspect.
In a fourth aspect, the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the intelligent dialogue method based on machine reading comprehension of any embodiment of the first aspect.
The above method, apparatus, terminal and computer-readable storage medium first retrieve a matching text for the question and then locate, within the retrieved text, the exact passage that can serve as the answer; that is, a relevant passage is excerpted directly from an existing text. Because existing texts are written by people, such as Baidu Baike entries or user replies in Baidu Tieba, the answers the method produces fully follow the conventions of natural language, improving readability. Second, the method needs no preset "question-answer" pairs and is no longer limited to pre-stored questions, so it can flexibly answer a wide variety of user questions, overcoming the prior art's need to continually maintain a question library and reducing the cost of updating data. In addition, by using an attention mechanism the method considers both the question's attention over the text and the text's attention over the question, and can therefore understand the relationship between question and text more fully, making the final answer more accurate and avoiding off-topic replies.
Additional aspects and advantages of the invention will be set forth in part in the description below, will become apparent from that description, or will be learned through practice of the invention.
Description of the drawings
The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of embodiments taken together with the accompanying drawings, in which:
Fig. 1 is a flowchart of the intelligent dialogue method based on machine reading comprehension of one embodiment;
Fig. 2 is a flowchart of the intelligent dialogue method based on machine reading comprehension of another embodiment;
Fig. 3 is a flowchart of the intelligent dialogue method based on machine reading comprehension of yet another embodiment;
Fig. 4 is a structural block diagram of the intelligent dialogue apparatus based on machine reading comprehension of one embodiment;
Fig. 5 is a structural block diagram of the preprocessing unit in the intelligent dialogue apparatus based on machine reading comprehension of another embodiment;
Fig. 6 is a structural block diagram of the preprocessing unit in the intelligent dialogue apparatus based on machine reading comprehension of yet another embodiment;
Fig. 7 is a schematic diagram of the internal structure of the intelligent dialogue terminal based on machine reading comprehension in one embodiment.
Specific embodiments
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements, or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, serve only to explain the invention, and are not to be construed as limiting the claims.
Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "the" and "said" used herein may also include plural forms. It should be further understood that the word "comprising" used in this specification indicates the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. Moreover, "connected" or "coupled" as used herein may include wireless connection or wireless coupling. The phrase "and/or" as used herein includes all of the listed items, or any unit and all combinations of one or more of them.
Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It should also be understood that terms such as those defined in ordinary dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art, and will not be interpreted in an idealized or overly formal sense unless specifically so defined here.
Those skilled in the art will understand that "terminal" and "terminal device" as used herein include both devices with only a wireless signal receiver and no transmitting capability, and devices with receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such devices may include: cellular or other communication devices, with or without a single-line or multi-line display; PCS (Personal Communications Service) devices, which may combine voice, data processing, fax and/or data communication capabilities; PDAs (Personal Digital Assistants), which may include a radio-frequency receiver, pager, Internet/intranet access, web browser, notepad, calendar and/or a GPS (Global Positioning System) receiver; and conventional laptop and/or palmtop computers or other devices that have and/or include a radio-frequency receiver. "Terminal" and "terminal device" as used herein may be portable, transportable, installed in a vehicle (air, sea and/or land), or suited and/or configured to operate locally and/or in distributed form at any location on earth and/or in space. The "terminal" or "terminal device" may also be a communication terminal, an Internet terminal, or a music/video playback terminal, for example a PDA, an MID (Mobile Internet Device) and/or a mobile phone with music/video playback functions, or a device such as a smart TV or set-top box.
Those skilled in the art will understand that a remote network device as used herein includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud composed of multiple servers. Here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a form of distributed computing: one super virtual computer composed of a group of loosely coupled computers. In embodiments of the present invention, communication between the remote network device, the terminal device and a WNS server may be realized by any communication means, including but not limited to mobile communication based on 3GPP, LTE or WiMAX, computer network communication based on TCP/IP or UDP, and short-range wireless transmission based on Bluetooth or infrared transmission standards.
An intelligent dialogue method based on machine reading comprehension provided by an embodiment of the present invention is introduced below. Referring to Fig. 1, the method of this embodiment includes:
Step S101: obtaining a question posed by a user and a text corresponding to the question;
Step S102: performing word segmentation and vectorization on the question and the text to obtain a question vector for each word in the question and a text vector for each word in the text;
Step S103: inputting the question vectors and text vectors into an attention model to obtain a first vector and a second vector, where the first vector indicates how strongly the question draws attention to each word in the text and the second vector indicates how strongly the text influences the generation of the question;
Step S104: determining an answer start point and an answer end point in the text according to the first and second vectors, and taking the passage between them as the answer to the question.
In this embodiment of the invention, first, based on the question posed by the user, a related text is searched for automatically; the text comes from the network or from a database. Then the question and the retrieved text are preprocessed (word segmentation, vectorization, etc.) to obtain question vectors and text vectors, so that the representations of the question and the text fit the input of the attention model. Next, the question vectors and text vectors are fed into the attention model together, yielding a first vector and a second vector that jointly describe how strongly question and text influence each other. In other words, an attention mechanism (AM) imitates the way humans understand text: when people read a text with a question in mind, the question leads them to allocate different amounts of attention to different words, so they notice words related to the question more easily and ignore unrelated ones. The first and second vectors describe exactly this allocation of attention between question and text. Finally, the answer start point and answer end point are determined in the text according to the first and second vectors, and the passage between them is taken as the answer to the question.
Compared with the prior art, the method of this embodiment first retrieves a matching text for the question and then locates, within that text, the exact passage that can serve as the answer; that is, a relevant passage is excerpted directly from an existing text. Because existing texts are written by people, such as Baidu Baike entries or user replies in Baidu Tieba, the answers generated by this method fully follow the conventions of natural language, improving readability. Second, the method needs no preset "question-answer" pairs and is no longer limited to pre-stored questions, so it can flexibly answer a wide variety of user questions, overcoming the prior art's need to continually maintain a question library and reducing the cost of updating data. In addition, by using an attention mechanism the method considers both the question's attention over the text and the text's attention over the question, understands the relationship between question and text more fully, and therefore produces a more accurate final answer, avoiding off-topic replies.
Further, step S101 specifically includes: obtaining the question posed by the user, and searching a network and/or a database for a text corresponding to the question.
For example, keywords are extracted from the question posed by the user and used as search terms, and a text corresponding to the question is found on the network, for example by crawler techniques. Keyword extraction is prior art and is not repeated here. Of course, the entire question may also be used directly as the search term to find the corresponding text on the network.
As another example, the degree of match between the user's question and the texts pre-stored in a database is computed, a recall value is calculated from the degree of match, and the text with the largest recall value is returned as the text corresponding to the question, the database holding a large number of texts in advance. In practice, a threshold may be set, and every text whose recall value exceeds the threshold is taken as a text corresponding to the user's question; alternatively, the texts are ranked by recall value and the top-ranked texts are taken as the texts corresponding to the question.
When several passages of text are obtained from the network or the database, they are concatenated end to end into a single text, which serves as the text corresponding to the user's question; subsequent vectorization is performed on this combined text.
Further, to balance the accuracy and efficiency of the text search, the two search methods above may be combined, specifically: search the database for a text corresponding to the user's question; when no corresponding text can be obtained from the database, search the network instead. In addition, texts found on the network may be added to the database, automatically updating and expanding it and reducing the cost of data updates.
The texts in the database are limited in number but highly reliable, while the texts on the network are all-encompassing but of uncertain relevance and accuracy, and searching them costs more. This embodiment therefore combines database search with network search: the database is searched first, and only when it contains no suitable text is the network searched for the text corresponding to the user's question, balancing the accuracy and efficiency of the text search.
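A minimal sketch of this combined strategy, assuming a simple term-overlap recall score and a `web_search` callable standing in for a crawler or search API (both hypothetical simplifications of what the patent describes):

```python
def recall(question_terms, text_terms):
    """Simplified recall: fraction of question terms present in the text."""
    if not question_terms:
        return 0.0
    return sum(1 for t in question_terms if t in text_terms) / len(question_terms)

def find_text(question_terms, database, web_search, threshold=0.5):
    """Try the curated database first; fall back to the web, caching hits."""
    scored = [(recall(question_terms, set(doc.split())), doc) for doc in database]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    if scored and scored[0][0] >= threshold:
        return scored[0][1]
    doc = web_search(question_terms)      # fall back to a network search
    if doc is not None:
        database.append(doc)              # expand the database automatically
    return doc

db = ["an oxygen bag is a sack for holding oxygen",
      "beijing issued a smog alert today"]
text = find_text(["oxygen", "bag"], db, web_search=lambda terms: None)
```

When the database holds a good match, no web query is issued at all; a web hit for an unseen question is appended to `db`, which is exactly the self-updating behavior the embodiment claims reduces maintenance cost.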
Further, as shown in Fig. 2, step S102 specifically includes:
Step S201: performing word segmentation on the question and the text separately;
Step S202: inputting the segmentation results of the question and the text into a vectorization model to obtain a first question vector for each word in the question and a first text vector for each word in the text, where the vectorization model may be, for example, GloVe or word2vec;
Step S203: updating the first question vectors and first text vectors with a bidirectional recurrent neural network to obtain the question vectors corresponding to the first question vectors and the text vectors corresponding to the first text vectors.
For example, the question posed by the user is "what is an oxygen bag", and the retrieved text corresponding to the question is "Today Beijing issued oxygen bags; an oxygen bag is a sack for holding oxygen." Segmenting the question yields three words, "oxygen bag", "is" and "what"; feeding these three words into the vectorization model yields three first question vectors: the first question vector for "oxygen bag" is X1=(x1,1, x1,2, ..., x1,V), the first question vector for "is" is X2=(x2,1, x2,2, ..., x2,V), and the first question vector for "what" is X3=(x3,1, x3,2, ..., x3,V), where V is a preset vector dimension. Similarly, segmenting the text corresponding to the question yields 14 words ("today", "Beijing", "issued", a particle, "oxygen bag", ",", "oxygen bag", "is", "used to", "hold", "oxygen", a particle, "sack", "."); feeding these 14 words into the vectorization model yields the corresponding 14 first text vectors Y1, Y2, ..., Y14; for example, the vector for "today" is Y1=(y1,1, y1,2, ..., y1,V) and the vector for "Beijing" is Y2=(y2,1, y2,2, ..., y2,V).
Then the first question vectors X1, X2, X3 are updated by a bidirectional recurrent neural network to obtain updated question vectors X'1, X'2, X'3. Specifically: the forward ordering (X1, X2, X3) and the reverse ordering (X3, X2, X1) of the first question vectors are fed into the bidirectional recurrent neural network for context learning, and the network outputs the three updated question vectors X'1, X'2, X'3. Similarly, the forward ordering (Y1, Y2, ..., Y14) and the reverse ordering (Y14, Y13, ..., Y1) of the first text vectors are fed into the bidirectional recurrent neural network to learn context, yielding the 14 updated text vectors Y'1, ..., Y'14.
Here the bidirectional recurrent neural network uses the LSTM structure.
The precision of the first question vectors and first text vectors produced by the vectorization model is limited. This embodiment therefore applies a bidirectional recurrent neural network to the question and the text separately for context learning, which can recognize the different senses the same word takes in different contexts and thus optimizes the first question vectors and first text vectors. For example, the word "apple" has different meanings in "I want to eat an apple" and "the computer I just bought is an Apple"; a bidirectional recurrent neural network disambiguates such cases well, so that the optimized text vectors and question vectors characterize the semantics of the question and the text more accurately.
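Steps S201-S203 can be sketched as follows. A random embedding table stands in for a trained word2vec/GloVe model, and a plain tanh recurrent cell stands in for the LSTM cell, so this only illustrates the forward-plus-reverse context-update pattern, not the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 4, 3                           # embedding size and hidden size (toy)

def embed(tokens, table):
    """Step S202: one first vector per word (stand-in for word2vec/GloVe)."""
    return np.stack([table[t] for t in tokens])

def birnn(X, Wf, Wb):
    """Step S203: read the sequence in forward and reverse order and
    concatenate the two hidden states, so each word's updated vector
    absorbs context from both directions."""
    T = len(X)
    hf, hb = np.zeros((T, H)), np.zeros((T, H))
    h = np.zeros(H)
    for t in range(T):                # forward ordering (X1, X2, ...)
        h = np.tanh(X[t] @ Wf[:V] + h @ Wf[V:])
        hf[t] = h
    h = np.zeros(H)
    for t in reversed(range(T)):      # reverse ordering (..., X2, X1)
        h = np.tanh(X[t] @ Wb[:V] + h @ Wb[V:])
        hb[t] = h
    return np.concatenate([hf, hb], axis=1)

question = ["oxygen-bag", "is", "what"]
table = {t: rng.normal(size=V) for t in question}
Wf = rng.normal(size=(V + H, H))
Wb = rng.normal(size=(V + H, H))
Q = birnn(embed(question, table), Wf, Wb)   # updated vectors X'1, X'2, X'3
```

Because the backward pass sees the words after position t and the forward pass the words before it, the same token "apple" would receive different updated vectors in the two example sentences above.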
Further, as shown in Fig. 3, step S102 may instead specifically include:
Step S301: performing word segmentation on the question and the text separately;
Step S302: removing the punctuation marks from the segmentation results of the text and the question;
Step S303: inputting the punctuation-free segmentation results of the question and the text into the vectorization model to obtain a first question vector for each word in the question and a first text vector for each word in the text;
Step S304: updating the first question vectors and first text vectors with a bidirectional recurrent neural network to obtain the question vectors corresponding to the first question vectors and the text vectors corresponding to the first text vectors.
As the previous example shows, the segmentation results also contain punctuation marks, which are of no help to semantic understanding. Therefore, before vectorizing the text and the question, the punctuation marks are first removed from their segmentation results, and only the word tokens in the segmentation results are vectorized. Continuing the previous example, for the text only the 12 words "today", "Beijing", "issued", a particle, "oxygen bag", "oxygen bag", "is", "used to", "hold", "oxygen", a particle and "sack" need to be vectorized, finally yielding 12 first text vectors, which reduces the amount of data processed and improves processing efficiency.
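Step S302 amounts to a filter over the token lists; the punctuation set below (ASCII marks plus a few full-width marks) is illustrative rather than the patent's exact list, and the token strings are hypothetical stand-ins for the segmented example:

```python
import string

FULL_WIDTH = "，。！？；：、（）"          # common full-width marks (illustrative)
PUNCT = set(string.punctuation) | set(FULL_WIDTH)

def strip_punct(tokens):
    """Drop tokens that are a single punctuation mark before vectorization."""
    return [t for t in tokens if t not in PUNCT]

# 12-token segmentation of an example text, two of the tokens punctuation
text_tokens = ["today", "beijing", "issued", "oxygen-bag", ",",
               "oxygen-bag", "is", "used-to", "hold", "oxygen", "sack", "."]
kept = strip_punct(text_tokens)       # 10 word tokens remain
```

Multi-character tokens such as "oxygen-bag" pass through unchanged because membership is tested on the whole token, not its characters; only standalone punctuation tokens are dropped.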
Further, step S103 specifically includes:
Step S401: inputting the question vectors and text vectors into a first neural network based on an attention mechanism to obtain an attention distribution, where each attention value in the distribution indicates how strongly a word in the question draws attention to a word in the text;
Step S402: for each word in the text, taking its attention values as weights and computing a weighted average of the question vectors to obtain that word's first vector;
Step S403: taking the maximum attention value for each word in the text, normalizing these maxima, and using the normalized maxima as weights to compute a weighted average of the text vectors, obtaining the second vector.
Taking the oxygen-bag example above: among the question vectors X'1, X'2, X'3, when the text "Today Beijing issued oxygen bags; an oxygen bag is a sack for holding oxygen." is read with the word "oxygen bag" in mind, each of the 14 words in the text receives a different amount of attention. A reader will not dwell on words such as "today", "Beijing" or "issued", and will more readily attend to words such as "oxygen bag", "is", "used to", "oxygen" and "sack". Using the attention mechanism, then, each word in the question vectors yields its influence on noticing each word in the text vectors; this influence is called the attention value. For example, the attention model produces for question vector X'1 a group of attention values (α1,1, α1,2, ..., α1,14), where α1,1 is the influence of the first question word "oxygen bag" on noticing the first text word "today", and α1,14 is its influence on noticing the 14th text word "." (here the variant that filters punctuation out of the question and the text is not used). Similarly, the attention model produces for X'2 the attention values (α2,1, α2,2, ..., α2,14) and for X'3 the attention values (α3,1, α3,2, ..., α3,14), finally yielding the attention distribution matrix A = (αi,j) with n rows and m columns,
where αi,j denotes the influence of the i-th word in the question on noticing the j-th word in the text, i.e. the attention value; i is a positive integer not larger than n, where n is the question length, i.e. the number of words obtained by segmenting the question (n = 3 in the example above); and j is a positive integer not larger than m, where m is the text length, i.e. the number of words obtained by segmenting the text (m = 14 in the example above).
The attention model may be obtained by training a common neural network, such as a CNN, RNN, BiRNN, GRU, LSTM, or Deep LSTM; the model is trained with general neural network training methods, which are not repeated here. The attention allocation probability distribution can be calculated in many ways; after testing, an MLP (multi-layer perceptron) method with good performance is preferably used.
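The patent does not spell out the structure of the MLP scoring network, so the following is only a hedged sketch of one common choice, additive (MLP-style) attention, scoring every question-word/text-word pair; the parameters `W_q`, `W_t`, and `v` stand in for trained weights and are filled with toy values here:

```python
import numpy as np

def mlp_attention(question_vecs, text_vecs, W_q, W_t, v):
    """Score every (question word, text word) pair with a small MLP
    (additive attention), producing an n x m matrix A in which A[i, j]
    plays the role of the attention value alpha_{i,j} of question word i
    toward text word j. W_q, W_t, v are assumed trained parameters."""
    n, m = len(question_vecs), len(text_vecs)
    A = np.empty((n, m))
    for i, q in enumerate(question_vecs):
        for j, t in enumerate(text_vecs):
            A[i, j] = v @ np.tanh(W_q @ q + W_t @ t)
    return A

# Toy run matching the example: n = 3 question words, m = 14 text words,
# word-vector dimension 4, MLP hidden size 8 (all sizes illustrative).
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
T = rng.standard_normal((14, 4))
A = mlp_attention(Q, T, rng.standard_normal((8, 4)),
                  rng.standard_normal((8, 4)), rng.standard_normal(8))
```

In practice the raw scores would be normalized (e.g. with a softmax) before use as probabilities, as the normalization steps below require.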
Then, taking the attention value corresponding to each word in the text as a weight, a weighted average of the question vectors is computed to obtain the first vector corresponding to each word in the text; that is, with each column of the attention allocation probability distribution A as weights, the question vectors X'1, X'2, X'3 are averaged. For example, the first vector corresponding to the first word of the text, "today", is W1 = α1,1X'1 + α2,1X'2 + α3,1X'3; in this way 14 first vectors W1, W2, …, W14 are obtained in total. To guarantee the objectivity of the weights, the attention values in each column of the attention allocation probability distribution A are normalized in advance, e.g. so that α1,1 + α2,1 + α3,1 = 1.
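In NumPy terms (an illustrative sketch with toy values, not the patent's trained model), the column-normalized weighted average above is:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 14))   # attention values: 3 question words x 14 text words
X = rng.random((3, 4))    # question vectors X'1..X'3, toy dimension 4

# Normalize each column of A so the weights for one text word sum to 1,
# e.g. alpha_{1,1} + alpha_{2,1} + alpha_{3,1} = 1.
A_norm = A / A.sum(axis=0, keepdims=True)

# First vector for text word j: W_j = sum_i alpha_{i,j} * X'_i,
# a weighted average of the question vectors.
W = A_norm.T @ X          # 14 first vectors, one per text word
```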
Next, the maximum of the attention values corresponding to each word in the text is taken, i.e. the maximum of each column of the attention allocation probability distribution A, giving 14 maxima max{α1,1, α2,1, α3,1}, …, max{α1,14, α2,14, α3,14}. These 14 maxima are normalized to obtain (β1, β2, …, β14), and with the normalized maxima (β1, β2, …, β14) as weights, a weighted average of the text vectors is computed, giving the second vector U = β1Y'1 + β2Y'2 + … + β14Y'14.
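The same computation for the second vector, again as a toy NumPy sketch (dimensions and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((3, 14))   # attention values (question words x text words)
Y = rng.random((14, 4))   # updated text vectors Y'1..Y'14, toy dimension 4

# Maximum attention value for each text word, then normalize the 14
# maxima to obtain (beta_1, ..., beta_14).
maxima = A.max(axis=0)
beta = maxima / maxima.sum()

# Second vector U = beta_1 * Y'1 + ... + beta_14 * Y'14.
U = beta @ Y
```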
Further, step S104 specifically includes:
Step S501, inputting the first vectors and the second vector into a second neural network, to obtain for each word in the text a first probability value of that word being the answer starting point;
Step S502, inputting the first vectors, the second vector, and the first probability values into a third neural network, to obtain for each word in the text a second probability value of that word being the answer end point;
Step S503, according to the first probability values and the second probability values, calculating for the passage between each pair of words in the text a third probability value of that passage being the answer, and taking the passage corresponding to the largest third probability value as the answer to the question.
Here, the second neural network and the third neural network are preferably bidirectional LSTMs (long short-term memory networks, a type of recurrent neural network).
Continuing the above example: first, the 14 first vectors W1, W2, …, W14 and the second vector U are input into the first bidirectional LSTM, which outputs 14 first probability values {P1, P2, …, P14}, where P1 denotes the probability that the first word of the text, "today", is the answer starting point.
Then, the first probability values {P1, P2, …, P14}, the 14 first vectors W1, W2, …, W14, and the second vector U are input into the second bidirectional LSTM, which outputs 14 second probability values {Q1, Q2, …, Q14}, where Q1 denotes the probability that the first word of the text, "today", is the answer end point.
Finally, according to the first and second probability values, the third probability value of each passage between two words being the answer is calculated; that is, the pairwise products Pi·Qj of the first probability values {P1, P2, …, P14} and the second probability values {Q1, Q2, …, Q14} are computed as third probability values, where i = 1, 2, …, 14, j = 1, 2, …, 14, and i < j. The maximum of all third probability values Pi·Qj is then taken; in other words, among the combinations whose answer starting point precedes the answer end point, the combination with the largest third probability value is found, and the passage between the corresponding answer starting point and answer end point is the answer to the question.
Based on the same inventive concept as the above intelligent dialogue method based on machine reading comprehension, as shown in Fig. 4, an embodiment of the present invention further provides an intelligent dialogue device 40 based on machine reading comprehension, comprising:
a text acquiring unit 401, configured to acquire a question raised by a user and a text corresponding to the question;
a preprocessing unit 402, configured to perform word segmentation and vectorization on the question and the text, to obtain a question vector corresponding to each word in the question and a text vector corresponding to each word in the text;
an attention computing unit 403, configured to input the question vectors and the text vectors into the attention model to obtain first vectors and a second vector, wherein a first vector indicates the influence degree of the question on a word in the text, and the second vector indicates the influence degree of the text on generating the question;
an answer positioning unit 404, configured to determine an answer starting point and an answer end point in the text according to the first vectors and the second vector, and to determine the passage between the answer starting point and the answer end point as the answer to the question.
Further, the text acquiring unit 401 is specifically configured to: acquire the question raised by the user, and search a network and/or a database for a text corresponding to the question.
Further, as shown in Fig. 5, the preprocessing unit 402 includes:
a word segmentation subunit 501, configured to perform word segmentation on the question and the text respectively;
a vectorization subunit 502, configured to input the word segmentation results of the question and the text into a vector model respectively, to obtain a first question vector corresponding to each word in the question and a first text vector corresponding to each word in the text;
an optimization subunit 503, configured to update the first question vectors and the first text vectors using a bidirectional recurrent neural network, to obtain the question vector corresponding to each first question vector and the text vector corresponding to each first text vector.
Further, as shown in Fig. 6, the preprocessing unit also includes a filtering subunit 504, configured to remove punctuation marks from the word segmentation results of the text and the question.
Further, the attention computing unit 403 is specifically configured to:
input the question vectors and the text vectors into a first neural network based on the attention mechanism, to obtain an attention allocation probability distribution, wherein each attention value in the attention allocation probability distribution indicates the influence degree of a word in the question on a word in the text;
taking the attention value corresponding to each word in the text as a weight, perform a weighted average over the question vectors to obtain the first vector corresponding to each word in the text;
take the maximum of the attention values corresponding to each word in the text, normalize the maxima corresponding to the words in the text, and, with the normalized maxima as weights, perform a weighted average over the text vectors to obtain the second vector.
Further, the answer positioning unit 404 is specifically configured to:
input the first vectors and the second vector into a second neural network, to obtain for each word in the text a first probability value of that word being the answer starting point;
input the first vectors, the second vector, and the first probability values into a third neural network, to obtain for each word in the text a second probability value of that word being the answer end point;
according to the first probability values and the second probability values, calculate for the passage between each pair of words in the text a third probability value of that passage being the answer, and take the passage corresponding to the largest third probability value as the answer to the question.
Here, the second neural network and the third neural network are bidirectional LSTM recurrent neural networks.
The intelligent dialogue device based on machine reading comprehension of this embodiment shares the same inventive concept with the intelligent dialogue method based on machine reading comprehension, achieves the same technical effects, and is not described again here.
Based on the same inventive concept as the above intelligent dialogue method based on machine reading comprehension, an embodiment of the present invention further provides an intelligent dialogue terminal based on machine reading comprehension, comprising: one or more processors; a memory; and one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the intelligent dialogue method based on machine reading comprehension of any of the above embodiments.
Fig. 7 is a schematic diagram of the internal structure of the intelligent dialogue terminal based on machine reading comprehension in one embodiment. As shown in Fig. 7, the terminal includes a processor, a non-volatile storage medium, a memory, and a network interface connected by a system bus. The non-volatile storage medium of the computer device stores an operating system, a database, and computer-readable instructions; the database may store a sequence of control information, and when the computer-readable instructions are executed by the processor, the processor may be caused to implement an intelligent dialogue method based on machine reading comprehension. The processor of the computer device provides computing and control capability and supports the operation of the entire computer device. Computer-readable instructions may be stored in the memory of the computer device, and when executed by the processor these instructions may cause the processor to perform an intelligent dialogue method based on machine reading comprehension. The network interface of the computer device is used for communicating with a terminal. Those skilled in the art will understand that the structure shown in Fig. 7 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
The intelligent dialogue terminal based on machine reading comprehension provided by this embodiment uses the same inventive concept as the above intelligent dialogue method based on machine reading comprehension, has the same beneficial effects, and is not described again here.
Based on the same inventive concept as the above intelligent dialogue method based on machine reading comprehension, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the intelligent dialogue method based on machine reading comprehension of any of the above embodiments.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
It should be understood that, although the steps in the flowcharts of the drawings are shown in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least part of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
The above are only some embodiments of the present invention. It should be noted that those skilled in the art may make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (10)
1. An intelligent dialogue method based on machine reading comprehension, characterized by comprising the steps of:
acquiring a question raised by a user and a text corresponding to the question;
performing word segmentation and vectorization on the question and the text, to obtain a question vector corresponding to each word in the question and a text vector corresponding to each word in the text;
inputting the question vectors and the text vectors into an attention model to obtain first vectors and a second vector, wherein a first vector indicates the influence degree of the question on a word in the text, and the second vector indicates the influence degree of the text on generating the question;
determining an answer starting point and an answer end point in the text according to the first vectors and the second vector, and determining the passage between the answer starting point and the answer end point as the answer to the question.
2. The method according to claim 1, wherein acquiring the question raised by the user and the text corresponding to the question comprises:
acquiring the question raised by the user, and searching a network and/or a database for the text corresponding to the question.
3. The method according to claim 1, wherein performing word segmentation and vectorization on the question and the text, to obtain the question vector corresponding to each word in the question and the text vector corresponding to each word in the text, comprises:
performing word segmentation on the question and the text respectively;
inputting the word segmentation results of the question and the text into a vector model respectively, to obtain a first question vector corresponding to each word in the question and a first text vector corresponding to each word in the text;
updating the first question vectors and the first text vectors using a bidirectional recurrent neural network, to obtain the question vector corresponding to each first question vector and the text vector corresponding to each first text vector.
4. The method according to claim 3, wherein before inputting the word segmentation results of the question and the text into the vector model respectively, the method further comprises:
removing punctuation marks from the word segmentation results of the text and the question.
5. The method according to claim 1, wherein inputting the question vectors and the text vectors into the attention model to obtain the first vectors and the second vector comprises:
inputting the question vectors and the text vectors into a first neural network based on the attention mechanism, to obtain an attention allocation probability distribution, wherein each attention value in the attention allocation probability distribution indicates the influence degree of a word in the question on a word in the text;
taking the attention value corresponding to each word in the text as a weight, performing a weighted average over the question vectors to obtain the first vector corresponding to each word in the text;
taking the maximum of the attention values corresponding to each word in the text, normalizing the maxima corresponding to the words in the text, and, with the normalized maxima as weights, performing a weighted average over the text vectors to obtain the second vector.
6. The method according to claim 1, wherein determining the answer starting point and the answer end point in the text according to the first vectors and the second vector, and determining the passage between the answer starting point and the answer end point as the answer to the question, comprises:
inputting the first vectors and the second vector into a second neural network, to obtain for each word in the text a first probability value of that word being the answer starting point;
inputting the first vectors, the second vector, and the first probability values into a third neural network, to obtain for each word in the text a second probability value of that word being the answer end point;
according to the first probability values and the second probability values, calculating for the passage between each pair of words in the text a third probability value of that passage being the answer, and taking the passage corresponding to the largest third probability value as the answer to the question.
7. The method according to claim 6, wherein the second neural network and the third neural network are both bidirectional LSTM recurrent neural networks.
8. An intelligent dialogue device based on machine reading comprehension, characterized by comprising:
a text acquiring unit, configured to acquire a question raised by a user and a text corresponding to the question;
a preprocessing unit, configured to perform word segmentation and vectorization on the question and the text, to obtain a question vector corresponding to each word in the question and a text vector corresponding to each word in the text;
an attention computing unit, configured to input the question vectors and the text vectors into an attention model to obtain first vectors and a second vector, wherein a first vector indicates the influence degree of the question on a word in the text, and the second vector indicates the influence degree of the text on generating the question;
an answer positioning unit, configured to determine an answer starting point and an answer end point in the text according to the first vectors and the second vector, and to determine the passage between the answer starting point and the answer end point as the answer to the question.
9. An intelligent dialogue terminal based on machine reading comprehension, characterized by comprising:
one or more processors;
a memory; and
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810642836.XA CN109086303B (en) | 2018-06-21 | 2018-06-21 | Intelligent conversation method, device and terminal based on machine reading understanding |
PCT/CN2019/070350 WO2019242297A1 (en) | 2018-06-21 | 2019-01-04 | Method for intelligent dialogue based on machine reading comprehension, device, and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109086303A true CN109086303A (en) | 2018-12-25 |
CN109086303B CN109086303B (en) | 2021-09-28 |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109753661A (en) * | 2019-01-11 | 2019-05-14 | 国信优易数据有限公司 | A kind of machine reads understanding method, device, equipment and storage medium |
CN109766423A (en) * | 2018-12-29 | 2019-05-17 | 上海智臻智能网络科技股份有限公司 | Answering method and device neural network based, storage medium, terminal |
CN109918560A (en) * | 2019-01-09 | 2019-06-21 | 平安科技(深圳)有限公司 | A kind of answering method and device based on search engine |
CN110134967A (en) * | 2019-05-22 | 2019-08-16 | 北京金山数字娱乐科技有限公司 | Text handling method, calculates equipment and computer readable storage medium at device |
CN110287290A (en) * | 2019-06-26 | 2019-09-27 | 平安科技(深圳)有限公司 | Based on marketing clue extracting method, device and the computer readable storage medium for reading understanding |
CN110399472A (en) * | 2019-06-17 | 2019-11-01 | 平安科技(深圳)有限公司 | Reminding method, device, computer equipment and storage medium are putd question in interview |
CN110442691A (en) * | 2019-07-04 | 2019-11-12 | 平安科技(深圳)有限公司 | Machine reads the method, apparatus and computer equipment for understanding Chinese |
WO2019242297A1 (en) * | 2018-06-21 | 2019-12-26 | 深圳壹账通智能科技有限公司 | Method for intelligent dialogue based on machine reading comprehension, device, and terminal |
CN111291841A (en) * | 2020-05-13 | 2020-06-16 | 腾讯科技(深圳)有限公司 | Image recognition model training method and device, computer equipment and storage medium |
CN111814466A (en) * | 2020-06-24 | 2020-10-23 | 平安科技(深圳)有限公司 | Information extraction method based on machine reading understanding and related equipment thereof |
CN111813989A (en) * | 2020-07-02 | 2020-10-23 | 中国联合网络通信集团有限公司 | Information processing method, device and storage medium |
CN111813961A (en) * | 2020-08-25 | 2020-10-23 | 腾讯科技(深圳)有限公司 | Data processing method and device based on artificial intelligence and electronic equipment |
CN112445887A (en) * | 2019-08-29 | 2021-03-05 | 南京大学 | Method and device for realizing machine reading understanding system based on retrieval |
CN113239165A (en) * | 2021-05-17 | 2021-08-10 | 山东新一代信息产业技术研究院有限公司 | Reading understanding method and system based on cloud robot and storage medium |
CN113300813A (en) * | 2021-05-27 | 2021-08-24 | 中南大学 | Attention-based combined source channel method for text |
CN113505219A (en) * | 2021-06-15 | 2021-10-15 | 北京三快在线科技有限公司 | Text processing method and device, electronic equipment and computer readable storage medium |
CN114638365A (en) * | 2022-05-17 | 2022-06-17 | 之江实验室 | Machine reading understanding reasoning method and device, electronic equipment and storage medium |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111428467B (en) * | 2020-02-19 | 2024-05-07 | 平安科技(深圳)有限公司 | Method, device, equipment and storage medium for generating problem questions for reading and understanding |
CN111753521B (en) * | 2020-06-28 | 2023-03-28 | 深圳壹账通智能科技有限公司 | Reading understanding method based on artificial intelligence and related equipment |
CN112163405B (en) * | 2020-09-08 | 2024-08-06 | 北京百度网讯科技有限公司 | Method and device for generating problems |
CN112464643B (en) * | 2020-11-26 | 2022-11-15 | 广州视源电子科技股份有限公司 | Machine reading understanding method, device, equipment and storage medium |
CN112764784B (en) * | 2021-02-03 | 2022-10-11 | 河南工业大学 | Automatic software defect repairing method and device based on neural machine translation |
CN113239148B (en) * | 2021-05-14 | 2022-04-05 | 电子科技大学 | Scientific and technological resource retrieval method based on machine reading understanding |
CN113535918B (en) * | 2021-07-14 | 2022-09-09 | 梁晨 | Pre-training dual attention neural network semantic inference dialogue retrieval method and system, retrieval equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101354714A (en) * | 2008-09-09 | 2009-01-28 | 浙江大学 | Method for recommending problem based on probability latent semantic analysis |
CN106484664A (en) * | 2016-10-21 | 2017-03-08 | 竹间智能科技(上海)有限公司 | Similarity calculating method between a kind of short text |
CN106570708A (en) * | 2016-10-31 | 2017-04-19 | 厦门快商通科技股份有限公司 | Management method and management system of intelligent customer service knowledge base |
CN106844368A (en) * | 2015-12-03 | 2017-06-13 | 华为技术有限公司 | For interactive method, nerve network system and user equipment |
US20170230399A1 (en) * | 2016-02-09 | 2017-08-10 | International Business Machines Corporation | Forecasting and classifying cyber attacks using neural embeddings migration |
CN107220296A (en) * | 2017-04-28 | 2017-09-29 | 北京拓尔思信息技术股份有限公司 | The generation method of question and answer knowledge base, the training method of neutral net and equipment |
CN107562792A (en) * | 2017-07-31 | 2018-01-09 | 同济大学 | A kind of question and answer matching process based on deep learning |
CN108021705A (en) * | 2017-12-27 | 2018-05-11 | 中科鼎富(北京)科技发展有限公司 | A kind of answer generation method and device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106776562B (en) * | 2016-12-20 | 2020-07-28 | 上海智臻智能网络科技股份有限公司 | Keyword extraction method and extraction system |
CN108170816B (en) * | 2017-12-31 | 2020-12-08 | 厦门大学 | Intelligent visual question-answering method based on deep neural network |
CN109086303B (en) * | 2018-06-21 | 2021-09-28 | 深圳壹账通智能科技有限公司 | Intelligent conversation method, device and terminal based on machine reading understanding |
Also Published As
Publication number | Publication date |
---|---|
CN109086303B (en) | 2021-09-28 |
WO2019242297A1 (en) | 2019-12-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109086303A (en) | Intelligent dialogue method, apparatus and terminal based on machine reading comprehension | |
CN111930940B (en) | Text emotion classification method and device, electronic equipment and storage medium | |
US11373047B2 (en) | Method, system, and computer program for artificial intelligence answer | |
CN107341145B (en) | User sentiment analysis method based on deep learning
US20160162569A1 (en) | Methods and systems for improving machine learning performance | |
US20170351663A1 (en) | Iterative alternating neural attention for machine reading | |
CN109947919A (en) | Method and apparatus for generating a text matching model
CN112084789B (en) | Text processing method, device, equipment and storage medium | |
CN111368548A (en) | Semantic recognition method and device, electronic equipment and computer-readable storage medium | |
CN110909145B (en) | Training method and device for multi-task model | |
CN109635094B (en) | Method and device for generating an answer
CN110619050B (en) | Intention recognition method and device | |
CN111428010A (en) | Man-machine intelligent question and answer method and device | |
KR20240116864A (en) | Enrich machine learning language models using search engine results | |
US20230094730A1 (en) | Model training method and method for human-machine interaction | |
CN116541493A (en) | Interactive response method, device, equipment and storage medium based on intention recognition | |
CN109858045A (en) | Machine translation method and device | |
CN111428011B (en) | Word recommendation method, device, equipment and storage medium | |
CN112287085A (en) | Semantic matching method, system, device and storage medium | |
CN112906381B (en) | Dialog attribution identification method and device, readable medium and electronic equipment | |
CN111444321B (en) | Question answering method, device, electronic equipment and storage medium | |
CN111008213A (en) | Method and apparatus for generating language conversion model | |
CN109002498B (en) | Man-machine conversation method, device, equipment and storage medium | |
CN113722436A (en) | Text information extraction method and device, computer equipment and storage medium | |
CN117131273A (en) | Resource searching method, device, computer equipment, medium and product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||