
CN1320482C - Natural voice pause in identification text strings - Google Patents

Natural voice pause in identification text strings

Info

Publication number
CN1320482C
Authority
CN
China
Prior art keywords
word
natural
text string
sounding
threshold value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CNB031327087A
Other languages
Chinese (zh)
Other versions
CN1604183A (en)
Inventor
陈桂林
祖漪清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nuance Communications Inc
Original Assignee
Motorola Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Motorola Inc filed Critical Motorola Inc
Priority to CNB031327087A priority Critical patent/CN1320482C/en
Priority to KR1020067006094A priority patent/KR20060056403A/en
Priority to EP04784433A priority patent/EP1668631A4/en
Priority to PCT/US2004/030570 priority patent/WO2005034085A1/en
Priority to RU2006114740/09A priority patent/RU2319221C1/en
Publication of CN1604183A publication Critical patent/CN1604183A/en
Application granted granted Critical
Publication of CN1320482C publication Critical patent/CN1320482C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/06 Elementary speech units used in speech synthesisers; Concatenation rules
    • G10L13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G10L13/10 Prosody rules derived from text; Stress or intonation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The present invention discloses a method (400) for automatically identifying natural speech pauses in a text string, the pauses being used for text-to-speech conversion in an electronic apparatus (100). The method (400) comprises: obtaining a text string (420) that has two ends, namely a start end and a finish end; an analysis step (440) in which at least one word in the text string is analyzed to determine whether a natural speech pause exists adjacent to the word, the analysis being based on at least one predetermined threshold for the word, the predetermined threshold being associated with the number of syllables between the word and one of the two ends of the text string; and an insertion step (460) in which the natural speech pause is inserted into a synthesized speech signal output representation of the text string.

Description

Method for identifying natural speech pauses in a text string
Technical field
The present invention relates generally to text-to-speech (TTS) synthesis. The invention is particularly useful for determining natural pauses in the synthesized speech of a text segment.
Background
Text-to-speech (TTS) conversion, also commonly known as continuous text-to-speech synthesis, allows an electronic device to receive an input text string and provide a converted representation of that string in the form of synthesized speech. However, for a device that must perform speech synthesis on an unrestricted variety of received text strings, providing realistic, high-quality synthesized speech is difficult. This is because the utterance of each word or syllable (for Chinese characters and similar characters) to be synthesized depends on both context and position. For example, the utterance of a word at the end of a sentence (the input text string) may be lengthened or prolonged. The utterance of the same word may also be prolonged if a natural speech pause, such as one required for emphasis, occurs in the middle of the sentence.
In most languages, the utterance of a word depends on prosodic parameters, which include pitch (pitch period), volume (power or amplitude) and duration. The prosodic parameter values of a word depend on the position of the word in a phrase and on the positions of natural speech pauses. However, prior-art text-to-speech (TTS) synthesis does not readily provide for identifying natural speech pauses in arbitrary input text.
In this specification and in the claims, the terms "comprise", "comprising" and similar terms denote a non-exclusive inclusion; for example, a method or apparatus that comprises a list of elements does not include only those elements listed, but may also include elements that are not expressly listed.
Summary of the invention
According to one aspect of the present invention, there is provided a method for automatically identifying natural speech pauses in a text string, the pauses being for use in text-to-speech conversion performed on an electronic device, the method comprising:
obtaining the text string, the text string comprising two ends, namely a start end and a finish end;
analyzing at least one word in the text string to determine whether a natural speech pause exists adjacent to the word, the analysis being based on at least one predetermined threshold for the word, the predetermined threshold being associated with the number of syllables between the word and one of the two ends of the text string; and
inserting the natural speech pause into a synthesized speech signal output representation of the text string.
Preferably, the at least one predetermined threshold comprises a P-word (P_word) threshold based on the number of syllables between the start end and the word.
Preferably, the at least one predetermined threshold comprises an F-word (F_word) threshold based on the number of syllables between the finish end and the word.
Preferably, the at least one predetermined threshold is determined by the following steps:
providing a training set of transcriptions in which at least one natural speech pause is identified by an inserted identifier;
identifying the words in each transcription as P words and F words;
statistically analyzing the P words and F words in the training set; and
determining the F-word threshold and the P-word threshold from the results of the statistical analysis.
Preferably, the inserted natural speech pauses may also include pauses identified as part-of-speech (POS) pattern natural pauses.
Preferably, the inserted natural speech pauses may also include pauses identified as compound-word natural pauses.
Brief description of the drawings
In order that the invention may be readily understood and put into practice, reference will now be made to a preferred embodiment as illustrated in the accompanying drawings, in which:
Fig. 1 is a schematic block diagram of an electronic device in accordance with the invention;
Fig. 2 illustrates a method 200 for determining threshold values associated with natural speech pauses in a text string;
Figs. 3A to 3D illustrate examples of transcriptions used in the method of Fig. 2;
Fig. 4 illustrates a method for automatically identifying natural speech pauses in a text string; and
Fig. 5 is a detailed illustration of the analysis step of Fig. 4.
Detailed description
Referring to Fig. 1, an electronic device 100 in the form of a radio telephone is shown. The electronic device 100 comprises a device processor 102 operatively coupled by a bus 103 to a user interface 104, which is typically a touch screen or, alternatively, a display screen and a keypad. The electronic device 100 also has a speech corpus 106, a speech synthesizer 110, a non-volatile memory 120, a read-only memory 118 and a radio communications module 116, all operatively coupled to the processor 102 by the bus 103. The speech synthesizer 110 has an output that is coupled to and drives a speaker 112. The corpus 106 comprises words or phonemes and associated sampled, digitized and processed speech waveform (PUW) representations. In addition, as described below, the non-volatile memory 120 (a memory module) provides text strings for text-to-speech (TTS) synthesis (the text may be received by the module 116 or by other means). The waveform speech corpus also comprises transcriptions representing phrases, the corresponding sampled and digitized speech waveforms, and the positions in the text strings associated with natural pause boundaries, as described below.
As will be apparent to a person skilled in the art, the radio frequency communications unit 116 is typically a combined receiver and transmitter with a common antenna. The radio frequency communications unit 116 has a transceiver coupled to the antenna via a radio frequency amplifier. The transceiver is also coupled to a combined modulator/demodulator that couples the communications unit 116 to the processor 102. In this embodiment, the non-volatile memory 120 (memory module) also stores a user-programmable phonebook database Db, and the read-only memory 118 stores operating code (OC) for the device processor 102.
Referring to Fig. 2, a method 200 for determining threshold values associated with natural speech pauses in a text string is illustrated. The thresholds are based on the number of syllables before and after pauses in the transcriptions of a training set TS. After a start step 210, the method 200 performs a providing step 220, in which a training set TS of transcriptions (typically sentences) is provided, each containing at least one natural speech pause identified by a manually inserted punctuation mark or identifier "|". Figs. 3A to 3D illustrate examples of such transcriptions or sentences. One of these transcriptions 300 is "Based on our history | in China", which has a natural speech pause 310 between the words "history" and "in". The transcription 300 has a start end 305 and a finish end 315. As will be apparent to a person skilled in the art, all of the transcriptions 300 in Figs. 3A to 3D have at least one natural speech pause 310, a start end 305 and a finish end 315. The transcription above is further analyzed as follows, the syllable count of each word being:
Based = 2 syllables
on = 1 syllable
our = 1 syllable
history = 3 syllables
in = 1 syllable
China = 2 syllables
Each word in a transcription can then be identified as one of: (i) a P word: a word immediately preceded in the transcription by a natural pause identified by the punctuation mark "|"; (ii) an F word: a word immediately followed in the transcription by a natural pause identified by the punctuation mark "|"; or (iii) a middle word: a word in the transcription with no natural speech pause next to it. After step 220, an identifying step 230 identifies each word in each transcription as (i) a P word, (ii) an F word, or (iii) a middle word. Thus, for the transcription "Based on our history | in China", Table 1 below identifies the attributes of each word:
Word     P word  F word  Syllable quantity  Pause
Based    N       N       0                  N
on       N       N       2                  N
our      N       N       3                  N
history  N       Y       4                  After
in       Y       N       7                  Before
China    N       N       1                  N
Table 1: Analysis of the transcription "Based on our history | in China"
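As an illustration only (this sketch is not taken from the patent; the function name label_words, the whitespace tokenization and the treatment of "|" as a stand-alone pause token are assumptions), identifying step 230 could be realized along the following lines in Python, labelling each word of a pause-annotated transcription as a P word, an F word or a middle word:

def label_words(transcription, syllables):
    """transcription: pause-annotated text such as "Based on our history | in China".
    syllables: per-word syllable counts, assumed to be supplied by the caller."""
    tokens = transcription.split()
    labels = []
    for i, token in enumerate(tokens):
        if token == "|":
            continue
        if i > 0 and tokens[i - 1] == "|":
            kind = "P word"        # the natural pause lies immediately before this word
        elif i + 1 < len(tokens) and tokens[i + 1] == "|":
            kind = "F word"        # the natural pause lies immediately after this word
        else:
            kind = "middle word"   # no natural pause next to this word
        labels.append((token, kind, syllables.get(token.lower(), 0)))
    return labels

# The transcription analyzed in Table 1:
counts = {"based": 2, "on": 1, "our": 1, "history": 3, "in": 1, "china": 2}
for word, kind, n in label_words("Based on our history | in China", counts):
    print(word, kind, n)

With this labelling, "history" comes out as an F word and "in" as a P word, in agreement with the P word and F word columns of Table 1.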
The method 200 then performs a statistical analysis step 240. In this step 240, suppose the training set TS provided has 90,000 transcriptions (for example, sentences) and that the word "in" occurs 10,000 times in the training set. For these 10,000 instances of "in", the following statistics may be observed:
(i) number of occurrences of "in" as a P word (OPW) = 8,000 instances;
(ii) number of occurrences of "in" as an F word (OFW) = 1,000 instances;
(iii) number of occurrences of "in" as a middle word, neither a P word nor an F word (ONW) = 1,000 instances.
Further, for the 8,000 instances identified in the training set TS in which "in" occurs as a P word, the following distribution of the number of syllables preceding "in" may be observed:
(i) 8 or more preceding syllables (OPS) = 0;
(ii) 7 preceding syllables (OPS) = 400;
(iii) 6 preceding syllables (OPS) = 600;
(iv) 5 preceding syllables (OPS) = 2,000;
(v) 4 preceding syllables (OPS) = 3,000;
(vi) 3 preceding syllables (OPS) = 1,000;
(vii) 2 preceding syllables (OPS) = 1,000;
(viii) 1 preceding syllable (OPS) = 0.
A heuristic ratio HR of 0.75, chosen by intuition and experiment, is used to determine the P-word pause threshold PT for the word "in". The threshold PT is determined in a threshold determination step 250 as follows:
starting from the maximum observed number of preceding syllables and working down towards the minimum, accumulate the OPS counts until:
(sum of OPS) / OPW >= 0.75;
PT is chosen as the number of preceding syllables identified by the last OPS count added to the sum;
end.
Accordingly, the PT of "in" is determined in step 250 as follows:
400 / 8,000 = 0.05 for 7 preceding syllables;
(400 + 600) / 8,000 = 0.125 for 6 preceding syllables;
(400 + 600 + 2,000) / 8,000 = 0.375 for 5 preceding syllables;
(400 + 600 + 2,000 + 3,000) / 8,000 = 0.75 for 4 preceding syllables;
PT is therefore chosen as 4.
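To make the threshold selection of step 250 concrete, the following Python sketch (an illustration of the procedure described above, not code from the patent; pause_threshold and ops_counts are invented names) accumulates the OPS counts from the largest observed syllable count downward and returns the syllable count at which the cumulative share of OPW first reaches the heuristic ratio HR:

def pause_threshold(ops_counts, heuristic_ratio=0.75):
    """ops_counts maps 'number of preceding syllables' to the number of
    observations of the word as a P word with that many preceding syllables."""
    opw = sum(ops_counts.values())            # OPW: total P-word occurrences of the word
    cumulative = 0
    for syllables in sorted(ops_counts, reverse=True):   # from the maximum downward
        cumulative += ops_counts[syllables]
        if cumulative / opw >= heuristic_ratio:
            return syllables                  # PT: last syllable count added to the sum
    return None

# Worked example from the description (the word "in"): prints 4.
print(pause_threshold({8: 0, 7: 400, 6: 600, 5: 2000, 4: 3000, 3: 1000, 2: 1000, 1: 0}))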
A similar statistical analysis is used in step 250 to determine the F-word pause threshold FT of "in", again using the heuristic ratio HR of 0.75. PT and FT values are likewise determined (using the heuristic ratio HR of 0.75) for the P-word and F-word instances of all other words in the training set TS. The method 200 then ends at step 260, with the values determined for the P-word and F-word instances of all words in the training set TS stored in the non-volatile memory 120.
Referring to Fig. 4, a method 400 for automatically identifying natural speech pauses in a text string STR is illustrated, the pauses being for use in text-to-speech conversion performed on the electronic device 100. After a start step 410, the method 400 performs an obtaining step 420 in which a text string STR comprising two ends is obtained, the two ends being a start end SE and a finish end FE. A word selection step 430 selects a word (or a compound word CW), and an analysis step 440 analyzes at least one word (or compound word CW) of the text string STR to determine whether a natural speech pause exists adjacent to the word (or compound word CW). The analysis is based on at least one predetermined threshold (PT or FT) for the word, the threshold being associated with the number of syllables between the word and one of the two ends of the text string. The thresholds comprise a P-word threshold PT, based on the number of syllables between the start end and the word, and an F-word threshold FT, based on the number of syllables between the finish end and the word.
If a test step 450 determines that step 440 has identified a pause, a natural speech pause is inserted for speech synthesis in step 460; otherwise no pause is inserted for the word selected in step 430. A test is then performed in step 470 to determine whether all words of the text string STR have been analyzed; if words remain to be analyzed, the method returns to step 430. Otherwise, a speech synthesis step 480 performs speech synthesis at the synthesizer 110 using the corpus 106, whereby any natural speech pauses inserted into the text string STR in step 460 are included in the synthesized speech signal output representation of the text string STR.
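One possible realization of the loop of steps 430 to 470 is sketched below (an assumption for illustration, not the patent's implementation; mark_pauses and pause_position are invented names, and "|" is used as the pause marker). The analysis of step 440 is supplied as a function argument; a sketch of such a function is given after the description of Fig. 5 below.

def mark_pauses(text_string, pause_position):
    """pause_position(words, index) plays the role of analysis step 440 and
    returns "before", "after" or None for the word at the given index."""
    words = text_string.split()                 # step 420: obtain the text string
    marked = []
    for index, word in enumerate(words):        # steps 430/470: select each word in turn
        where = pause_position(words, index)    # step 440: analyze the selected word
        if where == "before":                   # steps 450/460: insert the pause marker
            marked.append("|")
        marked.append(word)
        if where == "after":
            marked.append("|")
    return " ".join(marked)                     # passed on to synthesis in step 480

# Example with a trivial analysis function that never finds a pause:
print(mark_pauses("Based on our history in China", lambda words, index: None))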
Referring to Fig. 5, the analysis step 440 is shown in more detail. First, the text string STR is checked in step 441 to determine whether it contains a part-of-speech (POS) pattern natural pause. Examples of POS patterns associated with natural pauses are as follows:
1. number+noun
For example: two thousand books
2. verb+adverbial word
For example: look carefully
3. preposition+noun
For example: with telescopes
4. adjective+noun
For example: beautiful city
If a pause is identified in step 441, step 446 is performed and the pause is identified as an F-word pause. If no pause is identified in step 441, the text string STR is checked in step 442 to determine whether it contains a compound word calling for insertion of a natural pause. Examples of compound words associated with natural pauses are as follows:
a bit of
a body of
a few
a fleet of
a flooding of
a fraction of
a function of
a good deal
a good deal of
a great deal
a great deal of
a hint of
a large body of
a large number of
a lot of
a majority of
If a pause is identified in step 442, step 446 is performed and the pause is identified as an F-word pause. If no pause is identified in step 442, a test is performed in step 443 to determine whether the P-word threshold PT of the selected word has been reached. This determination is made by comparing the number of syllables between the start end of the text string STR and the selected word with the threshold PT. If the P-word threshold PT of the selected word has been reached, a natural pause is determined to exist and is identified as a P-word pause in step 444. If no pause is identified in step 443, a test is performed in step 445 to determine whether the F-word threshold FT of the selected word has been reached. This determination is made by comparing the number of syllables between the finish end of the text string STR and the selected word with the threshold FT. If the F-word threshold FT of the selected word has been reached, a natural pause is determined to exist and is identified as an F-word pause in step 446. Otherwise, no pause is identified in step 447.
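The checks of steps 441 to 447 could fit together as in the following sketch (again an illustrative assumption rather than the patent's code: the POS-pattern test of step 441 is reduced to a caller-supplied predicate, the compound-word list of step 442 and the per-word PT/FT thresholds are passed in as data, and per-word syllable counts are assumed to be available):

def pause_position(words, index, pt, ft, compound_words, is_pos_pattern, syllables):
    """Returns "before", "after" or None for the word at the given index."""
    word = words[index].lower()
    # Step 441: part-of-speech pattern (e.g. number + noun) -> F-word pause (after).
    if is_pos_pattern(words, index):
        return "after"
    # Step 442: the words ending here form a compound word -> F-word pause (after).
    tail = " ".join(words[max(0, index - 3):index + 1]).lower()
    if any(tail.endswith(compound) for compound in compound_words):
        return "after"
    syllables_before = sum(syllables.get(w.lower(), 1) for w in words[:index])
    syllables_after = sum(syllables.get(w.lower(), 1) for w in words[index + 1:])
    # Steps 443/444: P-word threshold PT reached -> pause before the word.
    if word in pt and syllables_before >= pt[word]:
        return "before"
    # Steps 445/446: F-word threshold FT reached -> pause after the word.
    if word in ft and syllables_after >= ft[word]:
        return "after"
    return None   # step 447: no natural pause identified next to this word

# Example use with the mark_pauses() sketch given earlier, assuming PT("in") = 4
# as determined above:
syllable_counts = {"based": 2, "on": 1, "our": 1, "history": 3, "in": 1, "china": 2}
analyze = lambda words, index: pause_position(
    words, index, pt={"in": 4}, ft={}, compound_words=["a number of"],
    is_pos_pattern=lambda w, i: False, syllables=syllable_counts)
print(mark_pauses("Based on our history in China", analyze))
# -> "Based on our history | in China"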
An advantage of the invention is that it allows natural speech pauses in a text string to be identified for use in text-to-speech (TTS) synthesis, thereby improving the quality of the synthesized speech.
The detailed description above provides preferred exemplary embodiments only and is not intended to limit the scope, applicability or configuration of the invention. Rather, the detailed description of the preferred exemplary embodiments is intended to enable those skilled in the art to implement a preferred exemplary embodiment of the invention. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth in the appended claims.

Claims (6)

1. A method for automatically identifying natural speech pauses in a text string, the pauses being for use in text-to-speech conversion performed on an electronic device, the method comprising:
obtaining the text string, the text string comprising two ends, namely a start end and a finish end;
analyzing at least one word in the text string to determine whether a natural speech pause exists adjacent to the word, the analysis being based on at least one predetermined threshold for the word, the predetermined threshold being associated with the number of syllables between the word and one of the two ends of the text string; and
inserting the natural speech pause into a synthesized speech signal output representation of the text string.
2. The method for automatically identifying natural speech pauses in a text string as claimed in claim 1, wherein the at least one predetermined threshold comprises a P-word threshold based on the number of syllables between the start end and the word.
3. The method for automatically identifying natural speech pauses in a text string as claimed in claim 1, wherein the at least one predetermined threshold comprises an F-word threshold based on the number of syllables between the finish end and the word.
4. The method for automatically identifying natural speech pauses in a text string as claimed in claim 1, wherein the at least one predetermined threshold is determined by:
providing a training set of transcriptions in which at least one natural speech pause is identified by an inserted identifier;
identifying the words in each of the transcriptions as P words and F words;
statistically analyzing the P words and F words in the training set; and
determining the F-word threshold and the P-word threshold from the results of the statistical analysis.
5. The method for automatically identifying natural speech pauses in a text string as claimed in claim 1, wherein the inserted natural speech pauses may also include pauses identified as part-of-speech pattern natural pauses.
6. The method for automatically identifying natural speech pauses in a text string as claimed in claim 1, wherein the inserted natural speech pauses may also include pauses identified as compound-word natural pauses.
CNB031327087A 2003-09-29 2003-09-29 Natural voice pause in identification text strings Expired - Lifetime CN1320482C (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CNB031327087A CN1320482C (en) 2003-09-29 2003-09-29 Natural voice pause in identification text strings
KR1020067006094A KR20060056403A (en) 2003-09-29 2004-09-17 Identifying natural speech pauses in a text string
EP04784433A EP1668631A4 (en) 2003-09-29 2004-09-17 Identifying natural speech pauses in a text string
PCT/US2004/030570 WO2005034085A1 (en) 2003-09-29 2004-09-17 Identifying natural speech pauses in a text string
RU2006114740/09A RU2319221C1 (en) 2003-09-29 2004-09-17 Method for identification of natural speech pauses in a text string

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB031327087A CN1320482C (en) 2003-09-29 2003-09-29 Natural voice pause in identification text strings

Publications (2)

Publication Number Publication Date
CN1604183A CN1604183A (en) 2005-04-06
CN1320482C true CN1320482C (en) 2007-06-06

Family

ID=34398361

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB031327087A Expired - Lifetime CN1320482C (en) 2003-09-29 2003-09-29 Natural voice pause in identification text strings

Country Status (5)

Country Link
EP (1) EP1668631A4 (en)
KR (1) KR20060056403A (en)
CN (1) CN1320482C (en)
RU (1) RU2319221C1 (en)
WO (1) WO2005034085A1 (en)

Families Citing this family (124)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8645137B2 (en) 2000-03-16 2014-02-04 Apple Inc. Fast, language-independent method for user authentication by voice
CN1260704C (en) * 2003-09-29 2006-06-21 摩托罗拉公司 Method for voice synthesizing
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
WO2008075076A2 (en) * 2006-12-21 2008-06-26 Symbian Software Limited Communicating information
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
US8712776B2 (en) 2008-09-29 2014-04-29 Apple Inc. Systems and methods for selective text to speech synthesis
US8352272B2 (en) 2008-09-29 2013-01-08 Apple Inc. Systems and methods for text to speech synthesis
US8396714B2 (en) 2008-09-29 2013-03-12 Apple Inc. Systems and methods for concatenation of words in text to speech synthesis
US8352268B2 (en) 2008-09-29 2013-01-08 Apple Inc. Systems and methods for selective rate of speech and speech preferences for text to speech synthesis
WO2010067118A1 (en) 2008-12-11 2010-06-17 Novauris Technologies Limited Speech recognition involving a mobile device
US8380507B2 (en) 2009-03-09 2013-02-19 Apple Inc. Systems and methods for determining the language to use for speech generated by a text to speech engine
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US8977584B2 (en) 2010-01-25 2015-03-10 Newvaluexchange Global Ai Llp Apparatuses, methods and systems for a digital conversation management platform
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US8994660B2 (en) 2011-08-29 2015-03-31 Apple Inc. Text correction processing
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9280610B2 (en) 2012-05-14 2016-03-08 Apple Inc. Crowd sourcing information to fulfill user requests
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
KR102380145B1 (en) 2013-02-07 2022-03-29 애플 인크. Voice trigger for a digital assistant
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
KR101759009B1 (en) 2013-03-15 2017-07-17 애플 인크. Training an at least partial voice command system
WO2014144579A1 (en) 2013-03-15 2014-09-18 Apple Inc. System and method for updating an adaptive speech recognition model
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
WO2014197336A1 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
CN110442699A (en) 2013-06-09 2019-11-12 苹果公司 Operate method, computer-readable medium, electronic equipment and the system of digital assistants
KR101809808B1 (en) 2013-06-13 2017-12-15 애플 인크. System and method for emergency calls initiated by voice command
KR101749009B1 (en) 2013-08-06 2017-06-19 애플 인크. Auto-activating smart responses based on activities from remote devices
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
WO2015184186A1 (en) 2014-05-30 2015-12-03 Apple Inc. Multi-command single utterance input method
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US9542929B2 (en) 2014-09-26 2017-01-10 Intel Corporation Systems and methods for providing non-lexical cues in synthesized speech
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179309B1 (en) 2016-06-09 2018-04-23 Apple Inc Intelligent automated assistant in a home environment
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
DK179560B1 (en) 2017-05-16 2019-02-18 Apple Inc. Far-field extension for digital assistant services
CN110970013A (en) * 2019-12-23 2020-04-07 出门问问信息科技有限公司 Speech synthesis method, device and computer readable storage medium
CN111667816B (en) * 2020-06-15 2024-01-23 北京百度网讯科技有限公司 Model training method, speech synthesis method, device, equipment and storage medium
CN114664283A (en) * 2022-02-28 2022-06-24 阿里巴巴(中国)有限公司 Text processing method in speech synthesis and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05160773A (en) * 1991-12-03 1993-06-25 Toshiba Corp Voice communication equipment
US5692104A (en) * 1992-12-31 1997-11-25 Apple Computer, Inc. Method and apparatus for detecting end points of speech activity
JPH08508127A (en) * 1993-10-15 1996-08-27 エイ・ティ・アンド・ティ・コーポレーション How to train a system, the resulting device, and how to use it

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0659695A (en) * 1992-08-11 1994-03-04 A T R Jido Honyaku Denwa Kenkyusho:Kk Voice regulation synthesizing device
US5634086A (en) * 1993-03-12 1997-05-27 Sri International Method and apparatus for voice-interactive language instruction
CN1099165A (en) * 1994-04-01 1995-02-22 清华大学 Chinese written language-phonetics transfer method and system based on waveform compilation
CN1331446A (en) * 2000-06-22 2002-01-16 上海贝尔有限公司 By-pass method for dialing access service of internet
JP2002311982A (en) * 2001-04-19 2002-10-25 Nippon Telegr & Teleph Corp <Ntt> Method, device and program for setting rhythm information, and recording medium
JP2003015680A (en) * 2001-07-03 2003-01-17 Nec Corp System, method and program for synthesizing voice

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Automatic Prediction of Pauses between Phrases in Chinese Sentences, Nie Xin, Wang Zuoying, Journal of Chinese Information Processing, Vol. 17, No. 4, 2003 *

Also Published As

Publication number Publication date
CN1604183A (en) 2005-04-06
WO2005034085A1 (en) 2005-04-14
KR20060056403A (en) 2006-05-24
RU2319221C1 (en) 2008-03-10
EP1668631A4 (en) 2008-05-14
EP1668631A1 (en) 2006-06-14

Similar Documents

Publication Publication Date Title
CN1320482C (en) Natural voice pause in identification text strings
CN1260704C (en) Method for voice synthesizing
CN1108603C (en) Voice synthesis method and device, and computer ready-read medium with recoding voice synthesizing program
US7809572B2 (en) Voice quality change portion locating apparatus
CN108364632B (en) Emotional Chinese text voice synthesis method
CN105206257B (en) A kind of sound converting method and device
US20110144997A1 (en) Voice synthesis model generation device, voice synthesis model generation system, communication terminal device and method for generating voice synthesis model
EP1396794A3 (en) Method and apparatus for expanding dictionaries during parsing
CN105304080A (en) Speech synthesis device and speech synthesis method
CN1946065A (en) Method and system for remarking instant messaging by audible signal
CN1731510B (en) Text-speech conversion for amalgamated language
CN103035251A (en) Method for building voice transformation model and method and system for voice transformation
CN1750121A (en) A kind of pronunciation evaluating method based on speech recognition and speech analysis
CN101051459A (en) Base frequency and pause prediction and method and device of speech synthetizing
US20060229877A1 (en) Memory usage in a text-to-speech system
CN1932976A (en) Method and system for realizing caption and speech synchronization in video-audio frequency processing
CN1032391C (en) Chinese character-phonetics transfer method and system edited based on waveform
CN113409761B (en) Speech synthesis method, speech synthesis device, electronic device, and computer-readable storage medium
CN101577115A (en) Voice input system and voice input method
CN103474067B (en) speech signal transmission method and system
CN1841496A (en) Method and apparatus for measuring speech speed and recording apparatus therefor
CN1787072A (en) Method for synthesizing pronunciation based on rhythm model and parameter selecting voice
CN1956057A (en) Voice time premeauring device and method based on decision tree
EP1668630B1 (en) Improvements to an utterance waveform corpus
CN1811912A (en) Minor sound base phonetic synthesis method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: NIUANSI COMMUNICATION CO., LTD.

Free format text: FORMER OWNER: MOTOROLA INC.

Effective date: 20101008

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: ILLINOIS STATE, USA TO: DELAWARE STATE, USA

TR01 Transfer of patent right

Effective date of registration: 20101008

Address after: Delaware

Patentee after: NUANCE COMMUNICATIONS, Inc.

Address before: Illinois, USA

Patentee before: Motorola, Inc.

CX01 Expiry of patent term

Granted publication date: 20070606

CX01 Expiry of patent term