JPH11237894A - Method and device for comprehending language - Google Patents
Method and device for comprehending language
- Publication number
- JPH11237894A
- Authority
- JP
- Japan
- Prior art keywords
- dialogue context
- candidate
- candidates
- priority
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Machine Translation (AREA)
Abstract
Description
[0001]
[Technical Field of the Invention] The present invention relates to a language understanding method and apparatus that performs language understanding of input utterances and outputs dialogue context candidates. More specifically, it relates to a language understanding method and apparatus for use in spoken dialogue systems and the like, in which the words people use in everyday life are input to a computer as character strings via a speech recognition device, and the meaning of each character string is then processed by the computer.
[0002]
[Description of the Related Art] Conventional language understanding techniques have targeted input in which the boundaries of sentences or utterances are explicit. For written language, a period is taken as the sentence boundary: the text between one period and the next is analyzed with language understanding rules, its semantic content is computed, and new context information is computed by combining that content with the context information computed from the preceding sentences. For spoken language, the utterance interval is either determined by having the user press a button while speaking, or delimited by pauses; after the utterance ends, the immediately preceding utterance interval is linguistically analyzed, its semantic content is computed, and the context information is computed.
[0003]
[Problems to be Solved by the Invention] In spoken language, the unit of a "sentence" is not well defined. A single word may express an intention, or a long utterance may be produced. If a delimiter such as a pause is used, the input unit is not determined until a pause appears, so the meaning of the utterance cannot be grasped and no fast reaction is possible. Moreover, a user may speak about many things without a break, and the system may have to grasp the meaning mid-stream and react appropriately, for example by giving a back-channel acknowledgment or asking a clarifying question. Even if one tries to find utterance boundaries in continuous speech by applying language understanding rules, the rules can be applied in ambiguous ways that admit many different interpretations, so it is unclear which application should be used.
[0004] It is an object of the present invention to provide a language understanding method and apparatus that make it possible to understand speech in real time even when the boundaries between utterances are not clearly known.
[0005]
[Means for Solving the Problems] In the language understanding method of the present invention, each time a word is input, a dictionary is consulted to obtain a semantic description of that word; a new dialogue context candidate is formed by combining the description with each of the dialogue context candidates obtained up to that point; language understanding rules are applied to these candidates to obtain further new dialogue context candidates; priorities are computed over the resulting candidates together with the original ones; and the dialogue context candidate with the highest priority is output.
[0006] The language understanding apparatus of the present invention comprises: means for holding the dialogue context candidates obtained up to the preceding point in time; means for consulting a dictionary when a word is input to obtain a semantic description of the word and for forming a new dialogue context candidate from the description and each of the held candidates; means for applying language understanding rules to the candidates so formed to obtain further new dialogue context candidates; and means for computing priorities over the obtained candidates together with the original ones. The processing of these means is repeated each time a word is input, and at each point in time the dialogue context candidate with the highest priority is output.
[0007]
[Embodiments of the Invention] FIG. 1 shows a processing flow diagram of one embodiment of the language understanding method according to the present invention; the flow is executed with the support of a computer. When a word is input (step 110), the dictionary (word dictionary) is first consulted to obtain a semantic description of the input word, such as its part of speech and semantic classification (step 120). Next, this semantic description is combined with each of the dialogue context candidates obtained up to the preceding word input, yielding a new set of dialogue context candidates (step 130). Language understanding rules are then applied to these candidates, and from the results a further set of new dialogue context candidates is obtained (step 140). Priorities are computed over these new candidates together with the original ones (step 150), and the dialogue context candidate with the highest priority is output (step 160). Thereafter, the processing of steps 110 to 160 is repeated each time a word is input.
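A minimal sketch of the per-word loop of steps 110-160, assuming an illustrative dictionary, rules encoded as rewrites of a suffix of the understanding record, and a flat +10 score per rule application (none of these details are fixed by this passage):

```python
# Illustrative sketch only: the dictionary entries, rule encoding, and +10
# scoring below are stand-ins, not the patent's actual data.

DICT = {"room-a": ("noun", "room"), "2pm": ("noun", "time")}   # step 120 source
RULES = [(("utterance", "set-room"), [("noun", "room")]),      # result <- pattern
         (("utterance", "set-time"), [("noun", "time")])]

def process_word(candidates, word):
    meaning = DICT[word]                                  # step 120: look up word
    # step 130: extend every held candidate with the new meaning
    extended = [(rec + (meaning,), pri) for rec, pri in candidates]
    result = []
    for rec, pri in extended:
        result.append((rec, pri))                         # keep original candidate
        for lhs, rhs in RULES:                            # step 140: try each rule
            n = len(rhs)
            if rec[-n:] == tuple(rhs):
                # step 150: a successful application raises the priority
                result.append((rec[:-n] + (lhs,), pri + 10))
    return result

candidates = [((), 0)]                                    # held candidate set
for w in ["room-a", "2pm"]:                               # step 110: words arrive
    candidates = process_word(candidates, w)
    best = max(candidates, key=lambda c: c[1])            # step 160: emit best
    print(best)
```

Because unreduced candidates are kept alongside rule-derived ones, the best interpretation can be re-read at every word without waiting for an utterance boundary.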
[0008] As described above, when the language understanding rules can be applied to the input utterance in more than one way, the language understanding method of the present invention keeps the multiple dialogue context candidates that result from each way of applying them, computes their priorities, and then finds and outputs the one with the highest priority. Consequently, even when the boundaries between utterances are not clearly known, the highest-priority context can be obtained at each point in time, so output is possible in real time.
[0009] Next, FIG. 2 shows a block diagram of one embodiment of the language understanding apparatus according to the present invention. The language understanding apparatus 200 consists of a dialogue context management unit 210, a rule application unit 220, a priority calculation unit 230, a dialogue context candidate holding unit 240, a dictionary (word dictionary) 250, and language understanding rules 260. The dialogue context candidate holding unit 240 holds the dialogue context candidates obtained up to the preceding point in time. When a word is input, the dialogue context management unit 210 consults the dictionary 250 to obtain a semantic description of the word, combines it with each of the candidates held in the holding unit 240 (those obtained up to the preceding word input) to form a new set of dialogue context candidates, and sends them to the rule application unit 220. The rule application unit 220 attempts to apply the language understanding rules 260 to each candidate it receives; each successful application yields a new dialogue context candidate, and the unit sends the new candidates, together with the original ones, to the priority calculation unit 230. The priority calculation unit 230 computes the priority of each candidate it receives and returns the candidates to the dialogue context management unit 210. The management unit 210 writes the prioritized candidates into the holding unit 240 and outputs the candidate with the highest priority at that point in time.
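As an illustrative sketch only (the class names, data formats, and scoring metric below are invented, not taken from the patent), the division of labour among units 210-260 might be mirrored like this:

```python
# Hypothetical decomposition mirroring FIG. 2; names and formats are invented.

class RuleApplier:                      # rule application unit 220
    def __init__(self, rules):
        self.rules = rules              # language understanding rules 260

    def expand(self, candidates):
        """Try every rule on every candidate; keep originals and additions."""
        out = []
        for rec, pri in candidates:
            out.append((rec, pri))
            for lhs, rhs in self.rules:
                n = len(rhs)
                if rec[-n:] == tuple(rhs):
                    out.append((rec[:-n] + (lhs,), pri))
        return out

class PriorityCalculator:               # priority calculation unit 230
    def score(self, rec):
        # stand-in metric: more rule-derived "utterance" items -> higher priority
        return sum(10 for item in rec if item[0] == "utterance")

class DialogueContextManager:           # management unit 210 + holding unit 240
    def __init__(self, dictionary, applier, calculator):
        self.dictionary = dictionary    # word dictionary 250
        self.applier = applier
        self.calculator = calculator
        self.held = [((), 0)]           # candidate holding unit 240

    def on_word(self, word):
        meaning = self.dictionary[word]
        extended = [(rec + (meaning,), pri) for rec, pri in self.held]
        expanded = self.applier.expand(extended)
        self.held = [(rec, self.calculator.score(rec)) for rec, _ in expanded]
        return max(self.held, key=lambda c: c[1])   # best context at this point

mgr = DialogueContextManager(
    {"room-a": ("noun", "room")},
    RuleApplier([(("utterance", "set-room"), [("noun", "room")])]),
    PriorityCalculator(),
)
best = mgr.on_word("room-a")
print(best)
```

Keeping the rule engine and the scoring separate, as the figure does, lets the priority policy be changed without touching rule application.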
[0010] FIG. 3 shows, as an example, the configuration of a spoken dialogue system that uses the language understanding apparatus 200. In FIG. 3, when the speech to be processed is input to the speech recognition device 300, each word recognized in the speech is sent to the language understanding apparatus 200. As shown in FIG. 2, the apparatus 200 obtains a dialogue context using the dictionary 250 and the language understanding rules 260 every time a word is input, and outputs it successively to the response planning device 400. The response planning device 400 decides from the context information whether a response is needed, plans an appropriate response if so, and generates and outputs the response message using the speech synthesis device 500.
[0011] As a concrete example, consider understanding the input 「第一会議室、2時30分」 ("Conference Room 1, 2:30"; hereinafter utterance 1) during a dialogue for reserving a conference room. Each time the speech recognition device 300 recognizes one of the three words 「第一会議室」 ("Conference Room 1"), 「2時」 ("2 o'clock"), and 「30分」 ("30 minutes") in the speech input, it sends that word to the language understanding apparatus 200. Assume the dictionary 250 and the language understanding rules 260 in the apparatus are as shown in FIGS. 4 and 5, respectively, where each line of FIG. 5 corresponds to one language understanding rule. For example, the rule

utterance(set-room) ← noun(room)

states that a noun whose semantic classification is "room" can be interpreted as an utterance commanding "set the conference room". Likewise, the rule

noun(time) ← noun(time) noun(minute)

states that a noun with semantic classification "time" followed by a noun with semantic classification "minute" can together be interpreted as a noun with semantic classification "time".
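As a hedged illustration (the romanised dictionary keys, English semantic-class names, and tuple encoding are assumptions; FIGS. 4 and 5 are not reproduced in this text), the dictionary entries and the rules quoted above could be encoded as:

```python
# One possible encoding of the FIG. 4 dictionary and FIG. 5 rules quoted in
# the text; keys, class names, and value strings are illustrative.

DICTIONARY = {
    "daiichi-kaigishitsu": ("noun", "room", "Conference Room 1"),  # 第一会議室
    "2-ji": ("noun", "time", "2:00"),                              # 2時
    "30-pun": ("noun", "minute", "30"),                            # 30分
}

# result (pos, semantic class)  <-  pattern of (pos, semantic class) items
RULES = {
    "A": (("utterance", "set-room"), [("noun", "room")]),
    "B": (("utterance", "set-time"), [("noun", "time")]),
    "C": (("noun", "time"), [("noun", "time"), ("noun", "minute")]),
}

def apply(rule, record):
    """Rewrite the record's tail if it matches the rule's pattern."""
    lhs, pattern = rule
    n = len(pattern)
    if len(record) >= n and [item[:2] for item in record[-n:]] == pattern:
        # simplified merge of values ("2時" + "30分" -> "2時30分" in the patent)
        value = " ".join(item[2] for item in record[-n:])
        return record[:-n] + (lhs + (value,),)
    return None

rec = (DICTIONARY["2-ji"], DICTIONARY["30-pun"])
rec = apply(RULES["C"], rec)   # noun(time) <- noun(time) noun(minute)
print(rec)                     # a single noun(time) item merging both values
```

Matching only the (part-of-speech, semantic class) prefix of each item, while carrying the surface value along, keeps the rule set small and declarative.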
[0012] First, when the word 「第一会議室」 ("Conference Room 1") is input, the dialogue context management unit 210 forms the following dialogue context candidate (1); the candidate holding unit 240 is empty at this point.

(1) priority: 0, understanding record: noun(room, Conference Room 1)

A dialogue context candidate consists of a priority and an understanding record, and the initial priority is 0. The understanding record records the state of understanding of the dialogue so far. Here, the semantic description noun(room, Conference Room 1) is obtained from the dictionary entry for 「第一会議室」 in FIG. 4 and recorded.
[0013] Next, candidate (1) is sent to the rule application unit 220, which applies the language understanding rules of FIG. 5 to its understanding record. In this case, rule (A) of FIG. 5 is applicable, producing the following new dialogue context candidates (2) and (3).
[0014]
(2) priority: 0, understanding record: noun(room, Conference Room 1)
(3) priority: 0, understanding record: utterance(set-room, Conference Room 1)

Candidate (2) is candidate (1) with no rule applied; candidate (3) results from the successful application of rule (A) of FIG. 5.
[0015] These two candidates (2) and (3) are sent to the priority calculation unit 230. There, the priority of candidate (3), to which a rule was applied, is increased, giving (3)′:

(3)′ priority: 10, understanding record: utterance(set-room, Conference Room 1)

The priority depends on the rule used and on the state of the understanding record at the time; in this example, each rule application adds 10. Candidate (3)′ is sent from the dialogue context management unit 210 to the response planning device 400.
[0016] Next, when 「2時」 ("2 o'clock") is input, the dialogue context candidates in the management unit 210 become:

(4) priority: 0, understanding record: noun(room, Conference Room 1), noun(time, 2:00)
(5) priority: 10, understanding record: utterance(set-room, Conference Room 1), noun(time, 2:00)
[0017] After passing through the rule application unit 220 and the priority calculation unit 230, candidates (4) and (5) become:

(6) priority: 10, understanding record: noun(room, Conference Room 1), utterance(set-time, 2:00)
(7) priority: 10, understanding record: utterance(set-room, Conference Room 1), noun(time, 2:00)
(8) priority: 20, understanding record: utterance(set-room, Conference Room 1), utterance(set-time, 2:00)

Candidate (6) is candidate (4) with rule (B) of FIG. 5 applied. Candidate (7) is candidate (5) with no rule applied, and candidate (8) is candidate (5) with rule (B) applied. The candidate derived from (4) with no rule applied is discarded because of its low priority. As a result, candidate (8) is sent from the management unit 210 to the response planning device 400.
[0018] Next, when 「30分」 ("30 minutes") is input, the management unit 210 forms the following candidates (9), (10), and (11):

(9) priority: 10, understanding record: utterance(set-room, Conference Room 1), noun(time, 2:00), noun(minute, 30)
(10) priority: 10, understanding record: noun(room, Conference Room 1), utterance(set-time, 2:00), noun(minute, 30)
(11) priority: 20, understanding record: utterance(set-room, Conference Room 1), utterance(set-time, 2:00), noun(minute, 30)
[0019] After the rule application unit 220 and the priority calculation unit 230, these become:

(12) priority: 20, understanding record: utterance(set-room, Conference Room 1), utterance(set-time, 2:00), noun(minute, 30)
(13) priority: 20, understanding record: utterance(set-room, Conference Room 1), noun(time, 2:30)
(14) priority: 30, understanding record: utterance(set-room, Conference Room 1), utterance(set-time, 2:30)

Candidate (12) is candidate (11) with no rule applied. Candidate (13) is candidate (9) with rule (C) of FIG. 5 applied, and candidate (14) is candidate (13) with rule (B) applied in turn. As a result, candidate (14) is sent from the management unit 210 to the response planning device 400.
[0020] That is, after 「2時」 is input, the interpretation in which the time setting is "2:00" has the highest priority, but after 「30分」 is input, the interpretation in which the time setting is "2:30" becomes the highest. The highest-priority interpretation can thus change incrementally, and in this way the context with the highest priority is obtained at each point in time.
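The whole example of paragraphs [0011]-[0020] can be reproduced end to end in a short sketch; the romanised word keys, the flat +10 score per rule application, and the simple string merge of values are illustrative assumptions, not the patent's actual data:

```python
# End-to-end sketch of the example in paragraphs [0011]-[0020].

DICT = {
    "daiichi-kaigishitsu": ("noun", "room", "Room 1"),   # 第一会議室
    "2-ji": ("noun", "time", "2:00"),                    # 2時
    "30-pun": ("noun", "minute", "30 min"),              # 30分
}
RULES = [  # (result (pos, class), pattern of (pos, class) items)
    (("utterance", "set-room"), [("noun", "room")]),                 # rule (A)
    (("utterance", "set-time"), [("noun", "time")]),                 # rule (B)
    (("noun", "time"), [("noun", "time"), ("noun", "minute")]),      # rule (C)
]

def closure(record, priority):
    """The candidate itself plus everything reachable by repeated rule
    application; each successful application adds 10 to the priority."""
    out, work, seen = [], [(record, priority)], set()
    while work:
        rec, pri = work.pop()
        if rec in seen:
            continue
        seen.add(rec)
        out.append((rec, pri))
        for lhs, pattern in RULES:
            n = len(pattern)
            if len(rec) >= n and [i[:2] for i in rec[-n:]] == pattern:
                merged = " ".join(i[2] for i in rec[-n:])   # value merge
                work.append((rec[:-n] + (lhs + (merged,),), pri + 10))
    return out

candidates = [((), 0)]
for word in ["daiichi-kaigishitsu", "2-ji", "30-pun"]:
    meaning = DICT[word]
    candidates = [c for rec, pri in candidates
                  for c in closure(rec + (meaning,), pri)]
    best = max(candidates, key=lambda c: c[1])
    print(word, "->", best[1])   # best priority: 10, then 20, then 30
```

After 「2時」 the best candidate corresponds to (8), and after 「30分」 the chained applications of rules (C) and (B) lift the "2:30" reading to priority 30, matching candidate (14). (Unlike the patent's example, this sketch keeps all low-priority candidates rather than discarding any.)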
[0021]
[Effects of the Invention] As described above, by using the language understanding method and apparatus of the present invention, speech whose utterance boundaries are not explicit can be understood in real time. This makes it possible to build spoken dialogue systems and the like that can respond immediately when a response is needed, no matter when the user chooses to speak.
[FIG. 1] A processing flow diagram of one embodiment of the language understanding method of the present invention.
[FIG. 2] A block diagram of one embodiment of the language understanding apparatus of the present invention.
[FIG. 3] A system configuration diagram of an application example of the language understanding apparatus of the present invention.
[FIG. 4] A diagram showing an example of the dictionary.
[FIG. 5] A diagram showing an example of the language understanding rules.
210 dialogue context management unit
220 rule application unit
230 priority calculation unit
240 dialogue context candidate holding unit
250 dictionary
260 language understanding rules
Claims (2)

1. A language understanding method for performing language understanding of an utterance and outputting dialogue context candidates, wherein, each time a word is input, a dictionary is consulted to obtain a semantic description of the word; a new dialogue context candidate is obtained by combining the description with each of the dialogue context candidates obtained up to the preceding point in time; language understanding rules are applied to these dialogue context candidates to obtain further new dialogue context candidates; priorities are computed over the obtained candidates together with the original candidates; and the dialogue context candidate with the highest priority is output.

2. A language understanding apparatus for performing language understanding of an utterance and outputting dialogue context candidates, comprising: means for holding the dialogue context candidates obtained up to the preceding point in time; means for consulting a dictionary when a word is input to obtain a semantic description of the word and for obtaining a new dialogue context candidate by combining the description with each of the held candidates; means for applying language understanding rules to the obtained candidates to obtain further new dialogue context candidates; and means for computing priorities over the obtained candidates together with the original candidates; wherein the processing of these means is repeated each time a word is input, and at each point in time the dialogue context candidate with the highest priority is output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP03783798A JP3525999B2 (en) | 1998-02-19 | 1998-02-19 | Language understanding method and language understanding device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP03783798A JP3525999B2 (en) | 1998-02-19 | 1998-02-19 | Language understanding method and language understanding device |
Publications (2)
Publication Number | Publication Date |
---|---|
JPH11237894A true JPH11237894A (en) | 1999-08-31 |
JP3525999B2 JP3525999B2 (en) | 2004-05-10 |
Family
ID=12508653
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP03783798A Expired - Fee Related JP3525999B2 (en) | 1998-02-19 | 1998-02-19 | Language understanding method and language understanding device |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP3525999B2 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006171096A (en) * | 2004-12-13 | 2006-06-29 | Ntt Docomo Inc | Continuous input speech recognition device and continuous input speech recognizing method |
WO2012023450A1 (en) * | 2010-08-19 | 2012-02-23 | 日本電気株式会社 | Text processing system, text processing method, and text processing program |
CN106057200A (en) * | 2016-06-23 | 2016-10-26 | 广州亿程交通信息有限公司 | Semantic-based interaction system and interaction method |
Also Published As
Publication number | Publication date |
---|---|
JP3525999B2 (en) | 2004-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10074363B2 (en) | Method and apparatus for keyword speech recognition | |
US11037553B2 (en) | Learning-type interactive device | |
EP2849177B1 (en) | System and method of text zoning | |
JP3454897B2 (en) | Spoken dialogue system | |
US6067514A (en) | Method for automatically punctuating a speech utterance in a continuous speech recognition system | |
JP5327054B2 (en) | Pronunciation variation rule extraction device, pronunciation variation rule extraction method, and pronunciation variation rule extraction program | |
JP3350293B2 (en) | Dialogue processing device and dialogue processing method | |
KR102390940B1 (en) | Context biasing for speech recognition | |
JP4729902B2 (en) | Spoken dialogue system | |
US20140350934A1 (en) | Systems and Methods for Voice Identification | |
WO2006054724A1 (en) | Voice recognition device and method, and program | |
Roddy et al. | Investigating speech features for continuous turn-taking prediction using lstms | |
Kopparapu | Non-linguistic analysis of call center conversations | |
CN114220461A (en) | Customer service call guiding method, device, equipment and storage medium | |
JP6712754B2 (en) | Discourse function estimating device and computer program therefor | |
US10248649B2 (en) | Natural language processing apparatus and a natural language processing method | |
Dyriv et al. | The user's psychological state identification based on Big Data analysis for person's electronic diary | |
Catania et al. | Automatic Speech Recognition: Do Emotions Matter? | |
US20190088258A1 (en) | Voice recognition device, voice recognition method, and computer program product | |
JP4220151B2 (en) | Spoken dialogue device | |
US6772116B2 (en) | Method of decoding telegraphic speech | |
JPH11237894A (en) | Method and device for comprehending language | |
Wolf | Speech recognition and understanding | |
Levow | Adaptations in spoken corrections: Implications for models of conversational speech | |
JP7028203B2 (en) | Speech recognition device, speech recognition method, program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
TRDD | Decision of grant or rejection written | ||
A01 | Written decision to grant a patent or to grant a registration (utility model) |
Free format text: JAPANESE INTERMEDIATE CODE: A01 Effective date: 20040210 |
|
A61 | First payment of annual fees (during grant procedure) |
Free format text: JAPANESE INTERMEDIATE CODE: A61 Effective date: 20040210 |
|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20080227 Year of fee payment: 4 |
|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20090227 Year of fee payment: 5 |
|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20100227 Year of fee payment: 6 |
|
LAPS | Cancellation because of no payment of annual fees |