US20220189458A1 - Speech based user recognition - Google Patents
- Publication number
- US20220189458A1 (application US 17/584,489)
- Authority
- US
- United States
- Prior art keywords
- data
- user
- speech
- component
- audio
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/06—Decision making techniques; Pattern matching strategies
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/01—Assessment or evaluation of speech recognition systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/24—Speech recognition using non-acoustical features
- G10L15/25—Speech recognition using non-acoustical features using position of the lips, movement of the lips or face analysis
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/04—Training, enrolment or model building
Definitions
- Speech recognition systems have progressed to the point where humans can interact with computing devices by speaking. Such systems employ techniques to identify the words spoken by a human user based on the various qualities of a received audio input. Speech recognition combined with natural language understanding processing techniques enable speech-based user control of a computing device to perform tasks based on the user's spoken commands. The combination of speech recognition and natural language understanding processing techniques is referred to herein as speech processing. Speech processing may also involve converting a user's speech into text data which may then be provided to various text-based software applications.
- Speech processing may be used by computers, hand-held devices, telephone computer systems, kiosks, and a wide variety of other devices to improve human-computer interactions.
- FIG. 1 illustrates a system for verifying an identity of a user that speaks an utterance according to embodiments of the present disclosure.
- FIG. 2 is a conceptual diagram of how a spoken utterance may be processed according to embodiments of the present disclosure.
- FIG. 3 is a conceptual diagram of a system architecture for parsing incoming utterances using multiple domains according to embodiments of the present disclosure.
- FIG. 4 is a conceptual diagram of how text-to-speech processing is performed according to embodiments of the present disclosure.
- FIG. 5 illustrates data stored and associated with user profiles according to embodiments of the present disclosure.
- FIG. 6 is a flow diagram illustrating processing performed to prepare audio data for ASR and user verification according to embodiments of the present disclosure.
- FIG. 7 is a diagram of a vector encoder according to embodiments of the present disclosure.
- FIG. 8 is a system flow diagram illustrating user verification according to embodiments of the present disclosure.
- FIG. 9 is a block diagram conceptually illustrating example components of a device according to embodiments of the present disclosure.
- FIG. 10 is a block diagram conceptually illustrating example components of a server according to embodiments of the present disclosure.
- FIG. 11 illustrates an example of a computer network for use with the system.
- ASR: automatic speech recognition
- NLU: natural language understanding
- TTS: text-to-speech
- Speech processing systems have become robust platforms enabled to perform a variety of speech related tasks such as playing music, controlling household devices, communicating with other users, shopping, etc.
- Speech processing systems may process a spoken utterance to obtain content responsive thereto (for example, outputting music, news content, or the like).
- Speech processing systems may also process a spoken utterance, and therefrom perform TTS processing to create computer-generated speech responsive to the spoken utterance thus enabling the system to engage in a conversation with a user and provide feedback and prompts in spoken form.
- Identification of a user refers to determining an identity of a user.
- Verification refers to confirming a previously determined identity of a user.
- Recognition of users refers to both identification and verification of a user.
- Some speech processing systems may perform user recognition through speech analysis (i.e., the system may verify whether a spoken utterance originated from a particular user by using the audio data of the utterance) and/or through different techniques. Such user verification may be desired prior to performing certain tasks such as controlling a home device, accessing a user's sensitive information, or addressing the user by name in a TTS response.
- Two different users may speak the same utterance but intend to receive different content in response.
- For example, two different users may both state “Play my music,” with one user intending to have a classical music playlist played and the other intending to have a rock music playlist played.
- Speech processing systems may therefore use user recognition to influence which content is ultimately determined to be responsive to a spoken utterance.
- Traditional user recognition determines a similarity between speech characteristics (i.e., features) of a spoken utterance and speech characteristics of test/training speech.
- The test/training speech is often spoken during enrollment of a user with the speech processing system. For example, when a user enrolls, the system may request that the user speak certain phrases, which are stored and used for comparison purposes during user recognition.
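The comparison of an utterance's speech characteristics against stored test/training characteristics can be sketched as a similarity check between feature vectors. The vector values, threshold, and function names below are illustrative assumptions, not taken from the disclosure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_speaker(utterance_vector, enrollment_vectors, threshold=0.75):
    """Compare an utterance's feature vector against a user's stored
    enrollment (training) vectors; accept if the best match clears the
    threshold. The threshold value is an arbitrary assumption."""
    best = max(cosine_similarity(utterance_vector, v) for v in enrollment_vectors)
    return best >= threshold, best
```

In practice the vectors would be many-dimensional speech feature vectors rather than the toy two-element vectors shown here.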
- the present disclosure improves traditional user recognition, and increases the accuracy thereof, by incorporating a confidence output of an ASR component into the user recognition analysis. Instead of simply using speech characteristics of a spoken utterance to determine a similarity with stored test/training speech characteristics, as is the case with traditional speech based user recognition, the present disclosure may use both the characteristics of the spoken utterance as well as ASR confidence output.
- the present disclosure also improves traditional user recognition by incorporating secondary information into the user recognition analysis.
- secondary information may include, for example, where an audio capture device is located within a building (e.g., home), a type of the audio capture device, a geographic location of the audio capture device, etc.
- the secondary information may include image data (such as from a camera that may capture an image of a speaking user), other device data (such as knowing where a user's wearable device is relative to an audio capture device), or the like.
- The teachings of the present disclosure are beneficial because they decrease the number of false user verifications, decrease the likelihood of unintentionally divulging data to an incorrectly recognized user, and generally improve system performance.
- FIG. 1 illustrates a system 100 configured to recognize an identity of a user according to the present disclosure.
- the system 100 may include one or more speech-controlled devices 110 local to a user 5 , and one or more servers 120 connected to the speech-controlled device(s) 110 across one or more networks 199 .
- the server(s) 120 (which may be one or more different physical devices) may be capable of performing speech processing (e.g., ASR, NLU, command processing, etc.), TTS processing, and user recognition as described herein.
- a single server 120 may perform all speech processing, TTS processing, and user recognition. Alternatively, multiple servers 120 may combine to perform all speech processing, TTS processing, and user recognition. Further, the server(s) 120 may execute certain commands, such as answering spoken utterances of the user 5 . In addition, certain speech detection or command execution functions may be performed by the speech-controlled device 110 .
- the speech-controlled device 110 may capture various speech (i.e., input audio 11 ) of the user 5 via a microphone 103 of the speech-controlled device 110 .
- capturing of the training speech may occur as part of enrolling the user with the speech-controlled device 110 /system 100 .
- the speech-controlled device 110 may then send training data corresponding to the training speech to the server(s) 120 .
- a microphone array (not illustrated), separate from the speech-controlled device 110 , may capture the training speech.
- the microphone array is in direct communication with the speech-controlled device 110 such that when the microphone array captures the training speech, the microphone array sends the training data to the speech-controlled device 110 .
- the speech-controlled device 110 may then forward the received training data to the server(s) 120 .
- the microphone array is in indirect communication with the speech-controlled device 110 via a companion application of a mobile computing device, such as a smart phone, tablet, laptop, etc.
- the microphone array when the microphone array captures the training speech, the microphone array sends the training data to the companion application, which forwards the training data to the speech-controlled device 110 .
- the speech-controlled device 110 may then forward the training data to the server(s) 120 .
- the microphone array is in indirect communication with the server(s) 120 via the companion application such that when the microphone array captures the training speech, the microphone array sends the training data to the companion application, which forwards the training data to the server(s) 120 .
- the server(s) 120 receives ( 150 ) the training data and associates ( 152 ) the training data with the user 5 in a user profile associated with the speech-controlled device 110 from which the training data originated. It should be appreciated that the server(s) 120 may receive ( 150 ) training data from multiple speech-controlled devices 110 of the system 100 , and may store the training data with respective users and user profiles.
- the server(s) 120 may simply store, in the user profile, waveforms of training data without determining features/vectors of the training data.
- features/vectors of the training data may be determined each time the server(s) 120 attempts to compare features/vectors of a spoken utterance to the training data.
- the server(s) 120 may determine features/vectors of the training data and associate ( 152 ) the features/vectors of the training data with the user in the user profile. This allows the server(s) 120 to determine the features/vectors of the training data only once, thereby avoiding duplicative processing.
- the microphone 103 of the speech-controlled device 110 captures an utterance (i.e., input audio 11 ) spoken by the user 5 .
- the server(s) 120 receives ( 154 ) input audio data corresponding to the spoken utterance from the speech-controlled device 110 or a companion application (depending upon implementation as described herein above).
- the server(s) 120 may perform ( 156 ) ASR on the input audio data to determine ASR confidence data indicating an ASR confidence.
- the server(s) 120 may also perform ( 158 ) user recognition using the input audio data to determine user recognition confidence data indicating a user verification confidence.
- Performing user recognition may include comparing speech characteristics (i.e., features/vectors) of the input audio data to speech characteristics of training data stored in a user profile associated with the speech-controlled device 110 from which the input audio data was received.
- the server(s) 120 may perform ( 156 ) ASR and perform ( 158 ) user recognition in parallel.
- the server(s) 120 may also receive ( 160 ) secondary data, such as where the speech-controlled device 110 (or microphone array) is located within a building (e.g., home), a type of the speech-controlled device 110 , and a geographic location of the speech-controlled device 110 , for example.
- the server(s) 120 may use ( 162 ) at least one of the ASR confidence data or the secondary data to alter/modify the originally determined user recognition confidence data.
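One way to picture step 162, altering the originally determined user recognition confidence using the ASR confidence and secondary data, is a simple score-fusion function. The weights, clamping, and names below are hypothetical, not the disclosure's actual method:

```python
def adjust_recognition_confidence(user_conf, asr_conf, secondary_boost=0.0):
    """Modify a baseline user recognition confidence (0..1) using the ASR
    confidence (0..1) and an optional boost derived from secondary data
    (e.g., a matching device location). Weights are illustrative only."""
    # Low ASR confidence suggests noisy or otherwise unreliable audio, so
    # discount the speech-based recognition score; high ASR confidence
    # leaves it intact. Secondary data can then raise (or lower) the result.
    adjusted = user_conf * (0.5 + 0.5 * asr_conf) + secondary_boost
    return max(0.0, min(1.0, adjusted))
```

For example, a 0.8 recognition confidence paired with very low ASR confidence would be discounted toward 0.4, while corroborating secondary data could push it back up.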
- user recognition may be performed using training data captured while enrolling the user with the system 100 /speech-controlled device 110 .
- user recognition may be performed without using training data captured during an enrollment process.
- reference data used to perform user recognition may be captured during runtime (i.e., when the user interacts with the system 100 at runtime by, for example, speaking commands).
- FIG. 2 is a conceptual diagram of how a spoken utterance is processed.
- the various components illustrated may be located on a same or different physical devices. Communication between various components illustrated in FIG. 2 may occur directly or across a network 199 .
- An audio capture component such as the microphone 103 of the speech-controlled device 110 (or other device), captures input audio 11 corresponding to a spoken utterance.
- the device 110 uses a wakeword detection component 220 to process audio data corresponding to the input audio 11 and determine whether a keyword (such as a wakeword) is detected in the audio data.
- the speech-controlled device 110 sends audio data 111 , corresponding to the utterance, to a server 120 that includes an ASR component 250 .
- the audio data 111 may be output from an acoustic front end (AFE) 256 located on the device 110 prior to transmission, or the audio data 111 may be in a different form for processing by a remote AFE 256 , such as the AFE 256 located with the ASR component 250 .
- the wakeword detection component 220 works in conjunction with other components of the device 110 , for example the microphone 103 to detect keywords in audio data corresponding to the input audio 11 .
- the device 110 may convert input audio 11 into audio data, and process the audio data with the wakeword detection component 220 to determine whether speech is detected, and if so, if the audio data comprising speech matches an audio signature and/or model corresponding to a particular keyword.
- the device 110 may use various techniques to determine whether audio data includes speech. Some embodiments may apply voice activity detection (VAD) techniques. Such techniques may determine whether speech is present in input audio based on various quantitative aspects of the input audio, such as a spectral slope between one or more frames of the input audio; energy levels of the input audio in one or more spectral bands; signal-to-noise ratios of the input audio in one or more spectral bands; or other quantitative aspects. In other embodiments, the device 110 may implement a limited classifier configured to distinguish speech from background noise. The classifier may be implemented by techniques such as linear classifiers, support vector machines, and decision trees.
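A minimal energy-based version of the VAD idea described above might look like the following; a real implementation would also consult spectral slope and per-band signal-to-noise ratios, and the threshold here is an arbitrary assumption:

```python
def frame_energy(frame):
    """Mean squared amplitude of one frame of audio samples."""
    return sum(s * s for s in frame) / len(frame)

def detect_speech(frames, energy_threshold=0.01):
    """Flag each frame as speech (True) or non-speech (False) by average
    energy. The threshold is illustrative; production VAD combines several
    quantitative cues, as described above."""
    return [frame_energy(f) >= energy_threshold for f in frames]
```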
- VAD: voice activity detection
- HMM: Hidden Markov Model
- GMM: Gaussian Mixture Model
- the device 110 may use the wakeword detection component 220 to perform wakeword detection to determine when a user intends to speak a command to the device 110 .
- This process may also be referred to as keyword detection, with the wakeword being a specific example of a keyword.
- keyword detection is typically performed without performing linguistic analysis, textual analysis, or semantic analysis. Instead, incoming audio (or audio data) is analyzed to determine if specific characteristics of the audio match preconfigured acoustic waveforms, audio signatures, or other data to determine if the incoming audio “matches” stored audio data corresponding to a keyword.
- the wakeword detection component 220 may compare audio data to stored models or data to detect a wakeword.
- One approach for wakeword detection applies general large vocabulary continuous speech recognition (LVCSR) systems to decode the audio signals, with wakeword searching conducted in the resulting lattices or confusion networks. LVCSR decoding may require relatively high computational resources.
- Another approach for wakeword spotting builds HMMs for each wakeword and non-wakeword speech signals respectively.
- the non-wakeword speech includes other spoken words, background noise, etc.
- Viterbi decoding is used to search the best path in the decoding graph, and the decoding output is further processed to make the decision on keyword presence.
- the wakeword spotting system may be built on deep neural network (DNN)/recursive neural network (RNN) structures directly, without an HMM involved.
- Such a system may estimate the posteriors of wakewords with context information, either by stacking frames within a context window for DNN, or using RNN.
- follow-on posterior threshold tuning or smoothing is applied for decision making.
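The follow-on smoothing step can be sketched as a moving average over per-frame wakeword posteriors, firing only when the smoothed value crosses a threshold; the window size and threshold below are illustrative assumptions:

```python
from collections import deque

def smoothed_wakeword_decision(posteriors, window=3, threshold=0.8):
    """Smooth per-frame wakeword posteriors with a moving average and
    decide the wakeword is present once a full window's average clears
    the threshold. A single noisy spike is thus not enough to fire."""
    recent = deque(maxlen=window)
    for p in posteriors:
        recent.append(p)
        if len(recent) == window and sum(recent) / window >= threshold:
            return True
    return False
```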
- Other techniques for wakeword detection such as those known in the art, may also be used.
- the local device 110 may “wake” and begin transmitting audio data 111 corresponding to input audio 11 to the server(s) 120 for speech processing (e.g., for purposes of executing a command in the speech).
- the audio data 111 may include data corresponding to the wakeword, or the portion of the audio data corresponding to the wakeword may be removed by the local device 110 prior to sending the audio data 111 to the server 120 .
- an ASR component 250 may convert the audio data 111 into text data.
- the ASR component 250 transcribes the audio data 111 into text data representing words of speech contained in the audio data 111 .
- the text data may then be used by other components for various purposes, such as executing system commands, inputting data, etc.
- a spoken utterance in the audio data 111 is input to a processor configured to perform ASR, which then interprets the spoken utterance based on a similarity between the spoken utterance and pre-established language models 254 stored in an ASR model knowledge base (i.e., ASR model storage 252 ).
- the ASR component 250 may compare the audio data 111 with models for sounds (e.g., subword units or phonemes) and sequences of sounds to identify words that match the sequence of sounds spoken in the spoken utterance of the audio data 111 .
- the different ways a spoken utterance may be interpreted may each be assigned a respective probability/confidence score representing a likelihood that a particular set of words matches those spoken in the spoken utterance.
- the confidence score may be based on a number of factors including, for example, a similarity of the sound in the spoken utterance to models for language sounds (e.g., an acoustic model 253 stored in the ASR model storage 252 ), and a likelihood that a particular word that matches the sound would be included in the sentence at the specific location (e.g., using a language model 254 stored in the ASR model storage 252 ).
- Based on the considered factors and the assigned confidence scores, the ASR component 250 outputs the most likely text recognized in the audio data 111 .
- the ASR component 250 may also output multiple hypotheses in the form of a lattice or an N-best list with each hypothesis corresponding to a confidence score or other score (e.g., such as probability scores, etc.).
- the device or devices including the ASR component 250 may include an AFE 256 and a speech recognition engine 258 .
- the AFE 256 transforms the audio data 111 into data for processing by the speech recognition engine 258 . Such transformation is discussed in further detail with regard to FIG. 6 below.
- the speech recognition engine 258 compares the speech recognition data with acoustic models 253 , language models 254 , and other data models and information for recognizing the speech conveyed in the audio data 111 .
- the AFE 256 may reduce noise in the audio data 111 and divide the digitized audio data 111 into frames representing time intervals for which the AFE 256 determines a number of values (i.e., features) representing qualities of the audio data 111 , along with a set of those values (i.e., a feature vector or audio feature vector) representing features/qualities of the audio data 111 within each frame.
- each audio frame includes 25 ms of audio and the frames start at 10 ms intervals resulting in a sliding window where adjacent audio frames include 15 ms of overlapping audio.
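The 25 ms frames starting at 10 ms intervals described above can be sketched as a sliding window over the audio samples (assuming, for illustration, 16 kHz audio; the disclosure does not fix a sample rate):

```python
def frame_audio(samples, sample_rate=16000, frame_ms=25, hop_ms=10):
    """Split audio samples into overlapping frames: frame_ms windows that
    start every hop_ms, so adjacent frames share frame_ms - hop_ms
    (here 15 ms) of audio."""
    frame_len = sample_rate * frame_ms // 1000   # 400 samples at 16 kHz
    hop = sample_rate * hop_ms // 1000           # 160 samples at 16 kHz
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]
```

Each resulting frame would then be reduced by the AFE 256 to a feature vector for ASR processing.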
- Many different features may be determined, as known in the art, and each feature represents some quality of the audio data 111 that may be useful for ASR processing.
- a number of approaches may be used by the AFE 256 to process the audio data 111 , such as mel-frequency cepstral coefficients (MFCCs), perceptual linear predictive (PLP) techniques, neural network feature vector techniques, linear discriminant analysis, semi-tied covariance matrices, or other approaches known to those skilled in the art.
- the speech recognition engine 258 may process data output from the AFE 256 with reference to information stored in the ASR model storage 252 .
- the speech-controlled device 110 may process audio data 111 into feature vectors (e.g., using an on-device AFE 256 ) and transmit the feature vector data to the server 120 across the network 199 for ASR processing.
- Feature vector data may arrive at the server 120 encoded, in which case it may be decoded prior to processing by the processor executing the speech recognition engine 258 .
- the speech recognition engine 258 attempts to match received feature vectors to language phonemes and words as known in the stored acoustic models 253 and language models 254 .
- the speech recognition engine 258 computes recognition scores for the feature vectors based on acoustic information and language information.
- the acoustic information is used to calculate an acoustic score representing a likelihood that the intended sound represented by a group of feature vectors matches a language phoneme.
- the language information is used to adjust the acoustic score by considering what sounds and/or words are used in context with each other, thereby improving a likelihood that the ASR component 250 will output speech results that make sense grammatically.
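Adjusting the acoustic score with language information can be sketched as a weighted combination of acoustic and language-model log-probabilities per hypothesis; the weight and the example hypotheses below are assumptions for illustration:

```python
def combined_score(acoustic_logprob, lm_logprob, lm_weight=0.8):
    """Combine acoustic and language-model log-probabilities for one
    hypothesis. The LM weight is a typical tunable, not a fixed value."""
    return acoustic_logprob + lm_weight * lm_logprob

def best_hypothesis(hypotheses):
    """hypotheses: list of (text, acoustic_logprob, lm_logprob) tuples.
    Returns the text whose combined score is highest."""
    return max(hypotheses, key=lambda h: combined_score(h[1], h[2]))[0]
```

A slightly better acoustic fit can thus lose to a hypothesis that makes more sense grammatically, which is the effect described above.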
- the speech recognition engine 258 may use a number of techniques to match feature vectors to phonemes, for example using HMMs to determine probabilities that feature vectors may match phonemes. Sounds received may be represented as paths between states of the HMM and multiple paths may represent multiple possible text matches for the same sound.
- the ASR results may be sent by the speech recognition engine 258 to other processing components, which may be local to the device performing ASR and/or distributed across the network(s) 199 .
- ASR results in the form of a single textual representation of the speech, an N-best list including multiple hypotheses and respective scores, lattice, etc. may be sent to a server, such as the server 120 , for natural language understanding (NLU) processing, such as conversion of the text data into commands for execution, either by the speech-controlled device 110 , by the server 120 , or by another device (e.g., a server running a search engine, etc.).
- the ASR component 250 may output text data 300 for further processing by an NLU component 260 , where the text data 300 may include a single top-scoring hypothesis or an N-best list including multiple hypotheses. Further, the ASR component 250 may output ASR confidence score data 807 for further processing by a user verification component 802 (discussed below) or other component.
- the ASR confidence score data 807 may include a respective score for each hypothesis in an N-best list or may include a single score for the top hypothesis output as the text data 300 .
- the ASR confidence score data 807 may include general confidence data, such as one or more values that indicate how generally confident the ASR component 250 was in its processing, without necessarily linking that confidence to a specific hypothesis.
- the ASR confidence score data 807 may be based on various factors such as audio quality, whether the hypotheses had similar scores or whether one hypothesis largely outscored the others, or other factors.
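A general confidence value based on whether one hypothesis largely outscored the others might, for example, be the probability gap between the top two N-best entries. This particular mapping is an assumption for illustration, not the disclosure's method:

```python
def general_asr_confidence(nbest_probs):
    """Map N-best hypothesis probabilities to one overall confidence:
    a dominant top hypothesis gives a value near 1, while near-ties
    between hypotheses give a value near 0."""
    ranked = sorted(nbest_probs, reverse=True)
    if len(ranked) < 2:
        return ranked[0] if ranked else 0.0
    return max(0.0, min(1.0, ranked[0] - ranked[1]))
```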
- the device performing NLU processing may include various components, including potentially dedicated processor(s), memory, storage, etc.
- the device performing NLU processing may include a dedicated NLU component 260 , which may include a named entity recognition (NER) component 262 and an intent classification (IC) component 264 .
- the device performing NLU processing may additionally include NLU storage 273 , and a knowledge base (not illustrated).
- the knowledge base is a database or other information storage that may include information about entities that may be used in resolving spoken utterances.
- the NLU component 260 may also utilize gazetteer information 284 stored in an entity library storage 282 .
- Gazetteers 284 may be linked to users (e.g., a particular gazetteer may be associated with a specific user's music collection), may be linked to certain domains (e.g., shopping), or may be organized in a variety of other ways.
- the NLU component 260 takes text data (e.g., output from the ASR component 250 based on the input audio data 111 ) and attempts to make a semantic interpretation of the text data. That is, the NLU component 260 determines the meaning behind the text data based on the individual words and then implements that meaning. The NLU component 260 interprets a text string to derive an intent or a desired action from the user as well as the pertinent pieces of information in the text data that allow a device (e.g., the speech-controlled device 110 , the server 120 , an application server, etc.) to complete that action.
- for example, for text data corresponding to "call mom," the NLU component 260 may determine the user intended to activate a telephone in his/her device and to initiate a call with a contact matching the entity "mom."
- the NLU component 260 may process several textual inputs related to the same utterance. For example, if the ASR component 250 outputs N text segments (e.g., as part of an N-best list), the NLU component 260 may process all N outputs to obtain NLU results.
- the NLU component 260 may be configured to parse and tag to annotate text data as part of NLU processing. For example, for the text data “call mom,” “call” may be tagged as a command (e.g., to execute a phone call) and “mom” may be tagged as a specific entity and target of the command. In addition, the telephone number for the entity corresponding to “mom” stored in a contact list may be included in the annotated NLU results.
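The parse-and-tag annotation described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the tag names and the contact-list contents are illustrative assumptions.

```python
# Hypothetical contact list used to enrich the annotated NLU results.
CONTACTS = {"mom": "555-0123"}

def annotate(text):
    """Tag the first word as a command and the remainder as the target entity."""
    command, _, entity = text.partition(" ")
    result = {"command": command, "entity": entity}
    # Include resolved data (e.g., a telephone number) in the annotation.
    if entity in CONTACTS:
        result["phone_number"] = CONTACTS[entity]
    return result

print(annotate("call mom"))
# {'command': 'call', 'entity': 'mom', 'phone_number': '555-0123'}
```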
- the NLU component 260 may be configured to determine a “domain” of the utterance so as to determine and narrow down which services offered by an endpoint device (e.g., the server 120 , the speech-controlled device 110 , an application server, etc.) may be relevant.
- an endpoint device may offer services relating to interactions with a telephone service, a contact list service, a calendar/scheduling service, a music player service, etc.
- Words in text data may implicate more than one service, and some services may be functionally linked (e.g., both a telephone service and a calendar service may utilize data from the contact list).
- the NER component 262 receives an utterance in the form of ASR results and attempts to identify relevant grammars and lexical information that may be used to construe meaning. To do so, the NER component 262 may begin by identifying potential domains that may relate to the received utterance.
- the NLU storage 273 includes a database of domains 274 associated with specific devices.
- the speech-controlled device 110 may be associated with domains for music, telephony, calendaring, contact lists, and device-specific communications.
- the entity library 282 may include database entries about specific services on a specific device, either indexed by Device ID, User ID, Household ID, or some other indicator.
- a domain may represent a discrete set of activities having a common theme, such as “shopping”, “music”, “calendaring”, etc. As such, each domain may be associated with a particular language model and/or grammar database 276 , a particular set of intents/actions 278 , and/or a particular personalized lexicon 286 .
- Each gazetteer 284 may include domain-indexed lexical information associated with a particular user and/or device.
- the Gazetteer A 284 a includes domain-indexed lexical information 286 aa to 286 an .
- a user's music-domain lexical information might include album titles, artist names, and song names, for example, whereas a user's contact-list lexical information might include the names of contacts. Since every user's music collection and contact list is presumably different, this personalized information improves entity resolution.
- An utterance may be processed applying the rules, models, and information applicable to each identified domain. For example, if an utterance potentially implicates both communications and music, the utterance will be NLU processed using the grammar models and lexical information for communications, and will also be processed using the grammar models and lexical information for music. The responses to the spoken utterance produced by each set of models are scored, with the overall highest-ranked result from all applied domains ordinarily being selected as the correct result. This is described in further detail below in reference to FIG. 3 .
- An IC component 264 parses the utterance to determine an intent(s) for each identified domain, where the intent(s) corresponds to the action to be performed that is responsive to the spoken utterance.
- Each domain is associated with a database 278 of words linked to intents.
- a music intent database may link words and phrases such as “quiet,” “volume off,” and “mute” to a “mute” intent.
- the IC component 264 identifies potential intents for each identified domain by comparing words in the utterance to the words and phrases in the intents database 278 .
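The intent identification described above, comparing words in the utterance against a per-domain intents database, can be sketched as follows. The database contents mirror the "mute" example; the function names and the phrase-matching strategy are assumptions.

```python
# Music-domain intents database linking words and phrases to intents,
# as in the "quiet" / "volume off" / "mute" example above.
INTENTS_DB = {
    "mute": ["quiet", "volume off", "mute"],
    "play": ["play", "start music"],
}

def classify_intent(utterance):
    """Return potential intents whose linked words/phrases appear in the utterance."""
    matches = []
    for intent, phrases in INTENTS_DB.items():
        if any(phrase in utterance for phrase in phrases):
            matches.append(intent)
    return matches

print(classify_intent("volume off please"))  # ['mute']
```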
- Each grammar model 276 includes the names of entities (i.e., nouns) commonly found in speech about the particular domain (i.e., generic terms), whereas the lexical information 286 from the gazetteer 284 is personalized to the user and/or the device.
- a grammar model 276 associated with a shopping domain may include a database of words commonly used when people discuss shopping.
- the intents identified by the IC component 264 are linked to domain-specific grammar frameworks (included in 276 ) with “slots” or “fields” to be filled.
- a grammar framework(s) may correspond to sentence structures such as "Play {Artist Name}," "Play {Album Name}," "Play {Song name}," "Play {Song name} by {Artist Name}," etc.
- these frameworks would ordinarily not be structured as sentences, but rather based on associating slots with grammatical tags.
- the NER component 262 may parse the spoken utterance to identify words as subject, object, verb, preposition, etc., based on grammar rules and models, prior to recognizing named entities.
- the identified verb may be used by the IC component 264 to identify intent, which is then used by the NER component 262 to identify frameworks.
- a framework for an intent of "play" may specify a list of slots/fields applicable to play the identified "object" and any object modifier (e.g., a prepositional phrase), such as {Artist Name}, {Album Name}, {Song name}, etc.
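One way to picture a grammar framework whose slots are filled from a matching sentence structure is the regex-based sketch below. The regex matching strategy is an assumption for illustration only; the framework strings mirror the "Play {Song name} by {Artist Name}" examples above.

```python
import re

# Illustrative frameworks: each pairs an intent with a slotted sentence pattern.
FRAMEWORKS = [
    ("PlayMusicIntent", r"play (?P<SongName>.+) by (?P<ArtistName>.+)"),
    ("PlayMusicIntent", r"play (?P<SongName>.+)"),
]

def match_framework(utterance):
    """Try each framework in order; return the intent and the filled slots."""
    for intent, pattern in FRAMEWORKS:
        m = re.fullmatch(pattern, utterance)
        if m:
            return intent, m.groupdict()
    return None, {}

intent, slots = match_framework("play mother's little helper by the rolling stones")
print(intent, slots)
```

In practice, as noted above, such frameworks would not be structured as literal sentences but as associations between slots and grammatical tags.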
- the NER component 262 searches the corresponding fields in the domain-specific and personalized lexicon(s), attempting to match words and phrases in the utterance tagged as a grammatical object or object modifier with those identified in the database(s).
- This process includes semantic tagging, which is the labeling of a word or combination of words according to their type/semantic meaning. Parsing may be performed using heuristic grammar rules, or the NER component 262 may be constructed using techniques such as hidden Markov models (HMMs), maximum entropy models, log linear models, conditional random fields (CRFs), and the like.
- an utterance of "play mother's little helper by the rolling stones" might be parsed and tagged as {Verb}: "Play," {Object}: "mother's little helper," {Object Preposition}: "by," and {Object Modifier}: "the rolling stones."
- “Play” is identified as a verb based on a word database associated with the music domain, which the IC component 264 will determine corresponds to the “play music” intent. No determination has been made as to the meaning of “mother's little helper” and “the rolling stones,” but based on grammar rules and models, it is determined that these phrases relate to the grammatical object of the spoken utterance.
- the frameworks linked to the intent are then used to determine what database fields should be searched to determine the meaning of these phrases, such as searching a user's gazetteer for similarity with the framework slots. So a framework for a "play music" intent might indicate to attempt to resolve the identified object based on {Artist Name}, {Album Name}, and {Song name}, and another framework for the same intent might indicate to attempt to resolve the object modifier based on {Artist Name}, and resolve the object based on {Album Name} and {Song Name} linked to the identified {Artist Name}.
- the NER component 262 may search the database of generic words associated with the domain (i.e., in the NLU storage 273 ). For instance, if the utterance was “play songs by the rolling stones,” after failing to determine an album name or song name called “songs” by “the rolling stones,” the NER component 262 may search the domain vocabulary for the word “songs.” In the alternative, generic words may be checked before the gazetteer information, or both may be tried, potentially producing two different results.
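The resolution order described above, checking the user's gazetteer first and falling back to the generic domain vocabulary, can be sketched as follows. The data structures and lookup order are illustrative assumptions.

```python
# Illustrative personalized gazetteer and generic domain vocabulary.
USER_GAZETTEER = {"AlbumName": [], "SongName": ["mother's little helper"]}
DOMAIN_VOCAB = {"songs", "album", "track"}

def resolve_object(phrase):
    """Return (slot, source) for a tagged object phrase, or (None, None)."""
    for slot in ("AlbumName", "SongName"):          # framework-specified slot order
        if phrase in USER_GAZETTEER.get(slot, []):
            return slot, "gazetteer"
    if phrase in DOMAIN_VOCAB:                      # generic-word fallback
        return "GenericWord", "domain vocabulary"
    return None, None

print(resolve_object("mother's little helper"))  # ('SongName', 'gazetteer')
print(resolve_object("songs"))                   # ('GenericWord', 'domain vocabulary')
```

As the text notes, the order could also be reversed, or both sources could be tried, potentially producing two different results.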
- the comparison process used by the NER component 262 may classify (i.e., score) how closely a database entry compares to a tagged utterance word or phrase, how closely the grammatical structure of the utterance corresponds to the applied grammatical framework, and based on whether the database indicates a relationship between an entry and information identified to fill other slots of the framework.
- the NER component 262 may also use contextual operational rules to fill slots. For example, if a user had previously requested to pause a particular song and thereafter requested that the speech-controlled device 110 “please un-pause my music,” the NER component 262 may apply an inference-based rule to fill a slot associated with the name of the song that the user currently wishes to play—namely the song that was playing at the time the user requested to pause the music.
- the results of NLU processing may be tagged to attribute meaning to the utterance. So, for example, "play mother's little helper by the rolling stones" might produce a result of: {domain} Music, {intent} Play Music, {artist name} "rolling stones," {media type} SONG, and {song title} "mother's little helper." As another example, "play songs by the rolling stones" might produce: {domain} Music, {intent} Play Music, {artist name} "rolling stones," and {media type} SONG.
- the output from the ASR component 250 may be sent to a user recognition component 802 .
- the user recognition component 802 performs user recognition using the audio data 111 , and optionally the ASR component output.
- the user recognition component 802 may include a scoring component that determines respective scores indicating whether the input utterance in the audio data 111 was spoken by particular users.
- the user recognition component 802 may also include a confidence component that determines an overall confidence as to the accuracy of the user recognition operations.
- the output from the NLU processing which may include tagged text data, commands, etc., and output of the user recognition component 802 (e.g., user recognition confidence data) may then be sent to a command processor 290 , which may be located on a same or separate server 120 as part of the system 100 .
- the system 100 may include more than one command processor 290 , and the command processor 290 may be determined based on the NLU output. For example, if the NLU output includes a command to play music, the command processor 290 selected may correspond to a music playing application, such as one located on the speech-controlled device 110 or in a music playing appliance, configured to execute a music playing command. Many such command processors 290 may be available to the system depending on the various applications that may be invoked.
- the command processor 290 selected may include a search engine processor, such as one located on a search server, configured to execute a search command and determine search results, which may include output text data to be processed by a TTS engine and output from a device as synthesized speech.
- the NLU operations of the system 100 may take the form of a multi-domain architecture, such as that illustrated in FIG. 3 .
- each domain (which may include a set of intents and entity slots that define a larger concept such as music, books, etc., as well as components such as trained models, etc. used to perform various NLU operations such as NER, IC, or the like) may be constructed separately and made available to the NLU component 260 during runtime operations where NLU operations are performed on text (such as text output from the ASR component 250 ).
- Each domain may have specially configured components to perform various steps of the NLU operations.
- the system 100 may include a multi-domain architecture consisting of multiple domains for intents/commands executable by the system 100 (or by other devices connected to the system 100 ), such as music, video, books, and information.
- the system 100 may include a plurality of domain recognizers 335 , where each domain may include its own recognizer 263 .
- Each recognizer may include various NLU components such as an NER component 262 , IC component 264 and other components such as an entity resolver, or other components.
- a music domain recognizer 263 -A may have an NER component 262 -A that identifies what slots (i.e., portions of input text data 300 ) may correspond to particular words relevant to that domain.
- the words may correspond to entities such as (for the music domain) a performer, album name, song name, etc.
- An NER component 262 may use a machine learning model, such as a domain specific conditional random field (CRF) to both identify the portions corresponding to an entity as well as identify what type of entity corresponds to the text data portion.
- an NER 262 -A trained for a music domain may recognize the portion of text [the stones] corresponds to an entity and an artist name.
- the music domain recognizer 263 -A may also have its own intent classification (IC) component 264 -A that determines the intent of the text assuming that the text is within the prescribed domain.
- An IC component 264 may use a model, such as a domain specific maximum entropy classifier to identify the intent of the text, where the intent is the action the user desires the system 100 to perform.
- domains may operate substantially in parallel, with different domain specific components. That is, domain B for video may have its own recognizer 263 -B including NER component 262 -B, and IC component 264 -B. Domain C for books may also have similar components in its recognizer 263 -C, and so on for the different domains available to the system.
- the same text data that is input into the NLU pipeline for domain A 263 -A may also be input into the NLU pipeline for domain B 263 -B, where the components for domain B 263 -B will operate on the text data as if the text data related to domain B, and so on for the different NLU pipelines for the different domains.
- Each domain specific NLU pipeline will create its own domain specific NLU results, for example NLU results A (for domain A), NLU results B (for domain B), NLU results C (for domain C), and so on.
- the different NLU results may then be ranked and further processed using other downstream components as explained below.
- an NER component 262 and IC component 264 may be considered part of a recognizer (such as recognizer 263 -A, 263 -B, 263 -C, etc.).
- the recognizers may operate using machine learning trained models such as a CRF, maximum entropy classifier, neural network (such as a deep neural network (DNN) or recurrent neural network (RNN)), or other classifier.
- the recognizers 335 may also use rules that operate on input query text data in a particular form to identify named entities and/or intents.
- the recognizers 335 may also operate using a data structure such as a finite state transducer (FST) to process the query text data to perform NER and/or IC.
- Other techniques may also be used to perform NER and/or IC.
- the techniques may also be used together. For example, a set of rules, an FST, and a trained machine learning model may all operate on input text data substantially in parallel to determine the named entities/intents of an input utterance. If one technique performs its task with high enough confidence, the system 100 may use the output of that technique over the others. The system 100 may also prioritize the results of different techniques in certain circumstances (for example rules results may be higher priority than FST results, which may be higher priority than model results, or some other configuration).
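The combination strategy described above, accepting a technique's output when its confidence is high enough and otherwise falling back to a configured priority order (here rules over FST over model, per the example), can be sketched as follows. The technique names, threshold, and result format are illustrative assumptions.

```python
# Configured priority order for the parallel NER/IC techniques.
PRIORITY = ["rules", "fst", "model"]

def select_result(results, threshold=0.9):
    """results: {technique: (output, confidence)} -> chosen output."""
    # Any technique confident enough wins, checked in priority order.
    for tech in PRIORITY:
        if tech in results and results[tech][1] >= threshold:
            return results[tech][0]
    # Otherwise fall back to the highest-priority technique that produced anything.
    for tech in PRIORITY:
        if tech in results:
            return results[tech][0]
    return None

outputs = {"model": ("PlayMusicIntent", 0.95), "fst": ("PlayVideoIntent", 0.4)}
print(select_result(outputs))  # PlayMusicIntent
```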
- Each recognizer (such as 263 -A, 263 -B, 263 -C, etc.) may have its own rules, FSTs, and/or models operating such that each recognizer operates substantially in parallel to the other recognizers to come up with its own interpretation of the input text data.
- the output of each recognizer 335 is an N-best list 340 of intents and identified slots representing the particular recognizer's top choices as to the meaning of the input text data, along with scores for each item in the N-best list 340 .
- the music domain recognizer 263 -A may output an N-best list 340 in the form of:
- PlayMusicIntent SongName: poker face ArtistName: lady gaga
- PlayMusicIntent AlbumName: poker face ArtistName: lady gaga
- the NER component 262 -A of the recognizer 263 -A has determined that for different items in the N-best list 340 , the words “poker face” correspond to a slot and the words “lady gaga” correspond to a slot. (Though different items in the N-best list 340 interpret those slots differently, for example labeling “poker face” as a song name in one choice but labeling it as an album name in another.)
- the IC component 264 -A of the recognizer 263 -A has also determined that the intent of the input text data 300 is a PlayMusicIntent (and selected that as the intent for each item on the music N-best list 340 ).
- the recognizer 263 -A also determined a score for each item on the list representing the recognizer's confidence that the particular item is correct. As can be seen in the example, the top item has the highest score.
- Each recognizer of the recognizers 335 may operate on the input text data 300 substantially in parallel, resulting in a number of different N-best lists 340 , one for each domain (e.g., one N-best 340 list for music, one N-best list 340 for video, etc.).
- the size of any particular N-best list 340 output from a particular recognizer is configurable and may be different across domains.
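Running the recognizers substantially in parallel on the same input text, one N-best list per domain, can be sketched as follows. The stand-in recognizers and their scores are illustrative assumptions, not the trained models the disclosure describes.

```python
# Trivial stand-ins for trained domain recognizers; each returns an N-best list.
def music_recognizer(text):
    return [{"intent": "PlayMusicIntent", "slots": {"SongName": text}, "score": 0.60}]

def video_recognizer(text):
    return [{"intent": "PlayVideoIntent", "slots": {"VideoName": text}, "score": 0.80}]

RECOGNIZERS = {"music": music_recognizer, "video": video_recognizer}

def run_recognizers(text):
    """Each domain interprets the same text as if the text belonged to that domain."""
    return {domain: recognizer(text) for domain, recognizer in RECOGNIZERS.items()}

lists = run_recognizers("the hunger games")
print(sorted(lists))  # ['music', 'video']
```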
- While the recognizers 335 perform NER (that is, they may identify words of the input text data 300 that are important for downstream processing, sometimes called light slot filling, and may even label those words according to type, such as artist name, album name, city, or the like), the recognizers 335 may not perform entity resolution (i.e., determining the actual entity corresponding to the words of the input text data 300 ). Entity resolution is typically a higher-latency process and involves communications with a knowledge base 272 or other component to precisely identify the specific entities. As this process is resource intensive, it may be preferable not to perform it for each item of every N-best list 340 across the multiple domains, as some items have low scores, are unlikely to be used, and any resources spent resolving them would be wasted.
- a filtering of potential results may first be performed before engaging in more resource intensive processing.
- the cumulative N-best lists 340 may be passed to a cross domain processing component 355 which may then further rank the individual items on the N-best lists 340 as well as perform other operations.
- the cross domain processing component 355 may include a cross-domain ranker 350 .
- the cross-domain ranker 350 takes the group of N-best lists 340 and selects from among the lists 340 the top choices to create a new N-best list 360 that may include items from different domains, but only includes the highest scoring ones of those domains.
- the purpose of the cross-domain ranker 350 is to create a new list of top scoring potential results, so that downstream (more resource intensive) processes may only operate on the top choices.
- As an example of a multi-domain N-best list 360 created by the cross-domain ranker 350 , take the example input text data 300 of "play the hunger games."
- the text data 300 may be processed by each of the recognizers 335 , and each will output an N-best list 340 , resulting in a group of N-best lists 340 input into the cross domain processing component 355 .
- the cross-domain ranker 350 may then rank the individual items among the N-best lists 340 to create a new N-best list 360 .
- the cross-domain ranker 350 may output an N-best list 360 in the form of:
- Video PlayVideoIntent VideoName: The Hunger Games
- Books ReadBookIntent BookName: The Hunger Games
- the top scoring item is from the video domain 263 -B, which includes the intent “playvideointent” and a slot labeled as video name corresponding to the text “the hunger games.”
- the next item is from the books domain 263 -C, and includes the intent “readbookintent” and a slot labeled as book name corresponding to the text “the hunger games.”
- Each item in the N-best list 360 may also include a score.
- the size of the cross domain N-best list 360 is configurable.
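The cross-domain ranking step above, merging the per-domain N-best lists and keeping only the overall top-scoring items up to a configurable size, can be sketched as follows. The scores and item format below are illustrative assumptions.

```python
import heapq

def cross_domain_rank(nbest_lists, size=3):
    """nbest_lists: {domain: [(score, intent, slots), ...]} -> merged top-N items."""
    merged = [
        (score, domain, intent, slots)
        for domain, items in nbest_lists.items()
        for score, intent, slots in items
    ]
    # Keep only the highest-scoring items across all domains.
    return heapq.nlargest(size, merged, key=lambda item: item[0])

lists = {
    "video": [(0.78, "PlayVideoIntent", {"VideoName": "The Hunger Games"})],
    "books": [(0.13, "ReadBookIntent", {"BookName": "The Hunger Games"})],
    "music": [(0.07, "PlayMusicIntent", {"AlbumName": "Hunger Games"})],
}
top = cross_domain_rank(lists, size=2)
print([item[1] for item in top])  # ['video', 'books']
```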
- While the cross-domain ranker 350 takes as input the N-best lists 340 , it may also consider other information, such as other data 391 .
- the cross domain processing component 355 may also include a light slot filler component 352 .
- the light slot filler 352 can take information from slots and alter it to make the data more easily processed by downstream components.
- the operations of the light slot filler 352 are typically low latency operations that do not involve heavy operations such as reference to a knowledge base.
- the purpose of the light slot filler 352 is to replace words with other words or values that may be more easily understood by downstream components. For example, if input text data 300 included the word “tomorrow”, the light slot filler 352 may replace the word “tomorrow” with an actual date for purposes of downstream processing. Similarly, a word “CD” may be replaced by a word “album.” The replaced words are then included in the cross domain N-best list 360 .
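The light slot filling described above, low-latency word substitutions with no knowledge-base lookup, can be sketched as follows, mirroring the "tomorrow" and "CD" examples. The substitution tables are illustrative assumptions.

```python
import datetime

# Simple synonym table, e.g. "CD" replaced by "album" for downstream components.
SYNONYMS = {"CD": "album"}

def light_fill(slot_value, today=None):
    """Replace slot words with values downstream components can handle directly."""
    today = today or datetime.date.today()
    if slot_value == "tomorrow":
        # Replace the word "tomorrow" with an actual date.
        return (today + datetime.timedelta(days=1)).isoformat()
    return SYNONYMS.get(slot_value, slot_value)

print(light_fill("tomorrow", today=datetime.date(2022, 1, 24)))  # 2022-01-25
print(light_fill("CD"))  # album
```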
- the N-best list 360 is then output to a heavy slot filler and entity resolution component 370 .
- the heavy slot filler and entity resolver 370 can apply rules or other instructions to standardize labels or tokens from previous stages into an intent/slot representation. The precise transformation may depend on the domain (for example, for a travel domain a text mention of “Boston airport” may be transformed to the standard BOS three-letter code referring to the airport).
- the resolver 370 can refer to an authority source (such as a knowledge base 272 ) that is used to specifically identify the precise entity referred to in the entity mention identified in the incoming text.
- Specific intent/slot combinations may also be tied to a particular source, which may then be used to resolve the text (such as by providing information or a command to be executed in response to a user query).
- the resolver 370 may refer to a personal music catalog, Amazon Music account, user profile (discussed in detail below), or the like.
- the output from the entity resolution component 370 may include an altered N-best list that is based on the cross-domain N-best list 360 but also includes more detailed information about the specific entities mentioned in the text data 300 (such as specific entity IDs) and/or more detailed slot data that can eventually be used by a command processor 290 which may be incorporated into the same system components or pipeline or may be on a separate device in communication with the system 100 . While illustrated as a cross-domain resolver, multiple resolution components 370 may exist where a particular resolution component 370 may be specific for one or more domains.
- the entity resolver 370 may not necessarily be successful in resolving every entity and filling every slot. This may result in incomplete results in the combined N-best list 360 .
- a re-scorer and final ranker component 390 may consider such errors when determining how to rank the ultimate results for potential execution. For example, if an item of the N-best list 360 comes from a book domain and includes a read book intent, but the entity resolver 370 cannot find a book with a title matching the input text data 300 , that particular result may be re-scored by the final ranker 390 to be given a lower score.
- Each item considered by the final ranker 390 may also be assigned a particular confidence, where the confidence may be determined by a recognizer 335 , cross domain processor 355 , or by the final ranker 390 itself. Those confidence scores may be used to determine how to rank the individual NLU results represented in the N-best lists 340 .
- the confidence scores may be affected by unfilled slots. For example, if one domain is capable of filling a slot (i.e., resolving the word in the slot to an entity or other recognizable form) for an input query, the results from that domain may have a higher confidence than those from a different domain that is not capable of filling a slot.
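The re-scoring behavior described above, lowering the score of an item whose entity could not be resolved (such as a read book intent with no matching title), can be sketched as follows. The penalty factor and item format are illustrative assumptions.

```python
def rescore(items, penalty=0.5):
    """items: [{'score': float, 'resolved': bool, ...}] -> copies sorted by new score."""
    rescored = [
        # Unresolved items are penalized; resolved items keep their score.
        dict(item, score=item["score"] * (1 if item["resolved"] else penalty))
        for item in items
    ]
    return sorted(rescored, key=lambda item: item["score"], reverse=True)

items = [
    {"domain": "books", "score": 0.8, "resolved": False},  # no matching book title
    {"domain": "video", "score": 0.6, "resolved": True},
]
print([item["domain"] for item in rescore(items)])  # ['video', 'books']
```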
- the final ranker 390 may be configured to apply re-scoring, biasing, or other techniques to obtain the most preferred ultimate result. To do so, the final ranker 390 may consider not only the NLU results of the N-best lists 340 , but may also consider other data 391 .
- This other data 391 may include a variety of information.
- the other data 391 may include application rating or popularity. For example, if one application has a particularly high rating, the system 100 may increase the score of results associated with that particular application.
- the other data 391 may also include information about applications that have been specifically enabled by the user (as indicated in a user profile as discussed in detail below). NLU results from enabled applications may be scored higher than results from non-enabled applications.
- User history may also be considered, such as if the user regularly uses a particular supplemental application or does so at particular times of day. Date, time, location, weather, type of device 110 , customer ID, context, and other information may also be considered.
- the system 100 may consider when any particular applications are currently active (such as music being played, a game being played, etc.) between the system 100 and device 110 .
- the highest scoring result (or results in the case of multiple commands being in an utterance) may be passed to a downstream command processor 290 for execution.
- the NLU component 260 may output NLU output data 385 .
- the NLU output data 385 may include an indicator of the command to be executed along with data associated with the command, for example an indication that the command is “play music” and the music to be played is “Adele.”
- the NLU output data 385 may be in the form of previous NLU data such as item(s) in N-best list 340 , item(s) in cross-domain N-best list 360 , or the like.
- the NLU output data 385 may also be in a format executable by the command processor 290 . Multiple instances of NLU output data (e.g., 385 a - 385 n ) may be output.
- a TTS component 414 may receive tagged text data from the command processor 290 , so the TTS component 414 may synthesize speech corresponding to the text data. Speech may be synthesized by the TTS component 414 as described below with respect to FIG. 4 .
- the TTS component 414 includes a TTS front end (TTSFE) 416 , a speech synthesis engine 418 , and a TTS storage 420 .
- the TTSFE 416 transforms input text data (e.g., from the command processor 290 ) into a symbolic linguistic representation for processing by the speech synthesis engine 418 .
- the TTSFE 416 may also process tags or other data input to the TTS component 414 that indicate how specific words should be pronounced.
- the speech synthesis engine 418 compares the annotated phonetic units against information stored in the TTS storage 420 to convert the input text data into speech (i.e., audio data).
- the TTSFE 416 and the speech synthesis engine 418 may include their own controller(s)/processor(s) and memory, or they may use the controller/processor and memory of the server 120 , the speech-controlled device 110 , or another device, for example.
- the instructions for operating the TTSFE 416 and the speech synthesis engine 418 may be located within the TTS component 414 , within the memory and/or storage of the server 120 , the speech-controlled device 110 , or within an external device.
- Text data input into the TTS component 414 may be sent to the TTSFE 416 for processing.
- the TTSFE 416 may include components for performing text normalization, linguistic analysis, and linguistic prosody generation. During text normalization, the TTSFE 416 processes the input text data and generates standard/normalized text data, converting such things as numbers, abbreviations (e.g., Apt., St., etc.), and symbols ($, %, etc.) into the equivalent of written out words.
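The text normalization step above, converting numbers, abbreviations, and symbols into the equivalent of written-out words, can be sketched as follows. The tiny substitution tables are illustrative assumptions; a real front end would use much larger, context-sensitive rules.

```python
import re

# Illustrative normalization tables mirroring the examples in the text.
ABBREVIATIONS = {"Apt.": "apartment", "St.": "street"}
SYMBOLS = {"$": "dollars", "%": "percent"}
NUMBERS = {"2": "two", "3": "three"}

def normalize(text):
    """Expand abbreviations, symbols, and numbers into written-out words."""
    for table in (ABBREVIATIONS, SYMBOLS, NUMBERS):
        for token, expansion in table.items():
            text = text.replace(token, expansion)
    return re.sub(r"\s+", " ", text).strip()

print(normalize("Apt. 3 costs 2 $"))  # apartment three costs two dollars
```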
- the TTSFE 416 analyzes language in the normalized text data to generate a sequence of phonetic units corresponding to the text data. This process may be referred to as phonetic transcription.
- Phonetic units include symbolic representations of sound units to be eventually combined and output by the system 100 as speech.
- Various sound units may be used for dividing text data for purposes of speech synthesis.
- the TTS component 414 may process speech based on phonemes (i.e., individual sounds), half-phonemes, di-phones (i.e., the last half of one phoneme coupled with the first half of an adjacent phoneme), bi-phones (i.e., two consecutive phonemes), syllables, words, phrases, sentences, or other units.
- Each word may be mapped to one or more phonetic units. Such mapping may be performed using a language dictionary stored by the system 100 , for example in the TTS storage 420 .
- the linguistic analysis performed by the TTSFE 416 may also identify different grammatical components such as prefixes, suffixes, phrases, punctuation, syntactic boundaries, or the like. Such grammatical components may be used by the TTS component 414 to craft a natural sounding audio waveform output.
- the language dictionary may also include letter-to-sound rules and other tools that may be used to pronounce previously unidentified words or letter combinations that may be encountered by the TTS component 414 . Generally, the more information included in the language dictionary, the higher quality the speech output.
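The phonetic transcription step above, mapping each word to phonetic units via a language dictionary with letter-to-sound rules for previously unidentified words, can be sketched as follows. The dictionary entries and the crude fallback table are illustrative assumptions.

```python
# Illustrative language dictionary mapping words to phonetic units.
LEXICON = {"hello": ["HH", "AH", "L", "OW"], "world": ["W", "ER", "L", "D"]}
# Tiny letter-to-sound fallback for words not in the dictionary.
LETTER_TO_SOUND = {"a": "AH", "b": "B", "c": "K"}

def transcribe(text):
    """Map each word to one or more phonetic units."""
    phones = []
    for word in text.lower().split():
        if word in LEXICON:
            phones.extend(LEXICON[word])
        else:  # unidentified word: fall back to per-letter rules
            phones.extend(LETTER_TO_SOUND.get(ch, ch.upper()) for ch in word)
    return phones

print(transcribe("hello world"))
```

Consistent with the text, the larger the dictionary, the less often the lower-quality fallback is needed.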
- the TTSFE 416 may then perform linguistic prosody generation where the phonetic units are annotated with desired prosodic characteristics, also called acoustic features, which indicate how the desired phonetic units are to be pronounced in the eventual output speech.
- the TTSFE 416 may consider and incorporate any prosodic annotations that accompanied the text data input to the TTS component 414 .
- Such acoustic features may include pitch, energy, duration, and the like.
- Application of acoustic features may be based on prosodic models available to the TTS component 414 . Such prosodic models indicate how specific phonetic units are to be pronounced in certain circumstances.
- a prosodic model may consider, for example, a phoneme's position in a syllable, a syllable's position in a word, a word's position in a sentence or phrase, neighboring phonetic units, etc. As with the language dictionary, a prosodic model with more information may result in higher quality speech output than prosodic models with less information.
- the output of the TTSFE 416 may include a sequence of phonetic units annotated with prosodic characteristics.
- This symbolic linguistic representation may be sent to the speech synthesis engine 418 , also known as a synthesizer, for conversion into an audio waveform of speech for output to an audio output device and eventually to a user.
- the speech synthesis engine 418 may be configured to convert the input text data into high-quality natural-sounding speech in an efficient manner. Such high-quality speech may be configured to sound as much like a human speaker as possible, or may be configured to be understandable to a listener without attempts to mimic a precise human voice.
- the speech synthesis engine 418 may perform speech synthesis using one or more different methods.
- a unit selection engine 430 matches the symbolic linguistic representation created by the TTSFE 416 against a database of recorded speech, such as a database of a voice corpus (e.g., the TTS voice unit storage 472 ).
- the unit selection engine 430 matches the symbolic linguistic representation against spoken audio units in the database. Matching units are selected and concatenated together to form a speech output.
- Each unit includes an audio waveform corresponding with a phonetic unit, such as a short .wav file of the specific sound, along with a description of the various acoustic features associated with the .wav file (e.g., its pitch, energy, etc.), as well as other information, such as where the phonetic unit appears in a word, sentence, or phrase, neighboring phonetic units, etc.
- the unit selection engine 430 may match units to the input text data to create a natural sounding waveform.
- the unit database may include multiple examples of phonetic units to provide the system 100 with many different options for concatenating units into speech.
- One benefit of unit selection is that, depending on the size of the database, a natural sounding speech output may be generated. As described above, the larger the unit database of the voice corpus, the more likely the system 100 will be able to construct natural sounding speech.
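The unit-selection matching and concatenation described above can be sketched as a greedy search. Real unit selection also weighs join costs between neighboring units and richer target costs; the unit database and pitch-only cost here are illustrative assumptions.

```python
# Greedy unit-selection sketch: for each target phonetic unit, pick the
# database unit whose pitch is closest to the desired pitch, then
# concatenate the selected waveforms. The database below is a toy
# stand-in for a voice corpus of recorded .wav units.
UNIT_DB = {
    "HH": [{"pitch": 120.0, "wav": [0.1, 0.2]}, {"pitch": 180.0, "wav": [0.3, 0.1]}],
    "AH": [{"pitch": 110.0, "wav": [0.5, 0.4]}, {"pitch": 200.0, "wav": [0.2, 0.6]}],
}

def select_units(targets):
    """targets: list of (phoneme, desired_pitch_hz) pairs."""
    waveform = []
    for phoneme, pitch in targets:
        candidates = UNIT_DB[phoneme]
        # Target cost here is simply the pitch difference.
        best = min(candidates, key=lambda u: abs(u["pitch"] - pitch))
        waveform.extend(best["wav"])
    return waveform
```

With more examples of each phonetic unit in the database, the selector has more candidates to choose from, which is why a larger voice corpus tends to yield more natural speech.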
- In another method of synthesis, called parametric synthesis, parameters such as frequency, volume, and noise are varied by a parametric synthesis engine 432 , a digital signal processor, or other audio generation device to create an artificial speech waveform output.
- Parametric synthesis uses a computerized voice generator, sometimes called a vocoder.
- Parametric synthesis may use an acoustic model and various statistical techniques to match a symbolic linguistic representation with desired output speech parameters.
- Parametric synthesis can be accurate at high processing speeds and can process speech without the large databases associated with unit selection, but it typically produces an output speech quality that may not match that of unit selection.
- Unit selection and parametric techniques may be performed individually or combined together and/or combined with other synthesis techniques to produce speech audio data output.
- the TTS component 414 may include an acoustic model, or other models, which may convert a symbolic linguistic representation into a synthetic acoustic waveform of the input text data based on audio signal manipulation.
- the acoustic model includes rules which may be used by the parametric synthesis engine 432 to assign specific audio waveform parameters to input phonetic units and/or prosodic annotations.
- the rules may be used to calculate a score representing a likelihood that a particular audio output parameter(s) (e.g., frequency, volume, etc.) corresponds to the portion of the input symbolic linguistic representation received from the TTSFE 416 .
- the parametric synthesis engine 432 may use a number of techniques to match speech to be synthesized with input phonetic units and/or prosodic annotations.
- One common technique is using HMMs.
- HMMs may be used to determine probabilities that audio output should match textual input.
- HMMs may be used to translate from parameters from the linguistic and acoustic space to parameters to be used by a vocoder (i.e., the digital voice encoder) to artificially synthesize the desired speech.
- a number of states are presented, in which the states together represent one or more potential acoustic parameters to be output to the vocoder and each state is associated with a model, such as a Gaussian mixture model.
- Transitions between states may also have an associated probability, representing a likelihood that a current state may be reached from a previous state.
- Sounds to be output may be represented as paths between states of the HMM and multiple paths may represent multiple possible audio matches for the same input text data.
- Each portion of text data may be represented by multiple potential states corresponding to different known pronunciations of phonemes and their parts (e.g., phoneme identity, stress, accent, position, etc.).
- An initial determination of a probability of a potential phoneme may be associated with one state.
- the state may change or stay the same, based on processing of the new text data. For example, the pronunciation of a previously processed word might change based on later processed words.
- a Viterbi algorithm may be used to find the most likely sequence of states based on the processed text data.
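The Viterbi search over HMM states can be sketched as below. The two-state HMM, its probabilities, and the observation symbols are toy assumptions introduced purely to make the algorithm concrete; a real TTS system would operate over acoustic parameter states.

```python
# Viterbi sketch: finds the most likely sequence of HMM states for a
# sequence of observations via dynamic programming plus backtracking.
def viterbi(observations, states, start_p, trans_p, emit_p):
    # best[t][s] = probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        best.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (best[t - 1][p] * trans_p[p][s] * emit_p[s][observations[t]], p)
                for p in states
            )
            best[t][s] = prob
            back[t][s] = prev
    # Trace back the most likely path from the best final state.
    last = max(best[-1], key=best[-1].get)
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Toy two-state HMM (illustrative probabilities only).
STATES = ["A", "B"]
START = {"A": 0.6, "B": 0.4}
TRANS = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
EMIT = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
```

For long inputs a practical implementation would work in log probabilities to avoid underflow.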
- the HMMs may generate speech in parametrized form including parameters such as fundamental frequency (f0), noise envelope, spectral envelope, etc. that are translated by a vocoder into audio segments.
- the output parameters may be configured for particular vocoders such as a STRAIGHT vocoder, TANDEM-STRAIGHT vocoder, harmonic plus noise (HNM) based vocoders, code-excited linear prediction (CELP) vocoders, GlottHMM vocoders, harmonic/stochastic model (HSM) vocoders, or others.
- the system 100 may be configured with multiple voice inventories 478 (stored in the TTS voice unit storage 472 ), where each unit database is configured with a different “voice.” Such voice inventories may also be linked to user accounts, discussed below. For example, one voice corpus may be stored to be used to synthesize whispered speech (or speech approximating whispered speech), another may be stored to be used to synthesize excited speech (or speech approximating excited speech), and so on. To create the different voice corpuses a multitude of TTS training utterances may be spoken by an individual and recorded by the system 100 .
- the TTS training utterances used to train a TTS voice corpus may be different from the training utterances used to train an ASR system.
- the audio associated with the TTS training utterances may then be split into small audio segments and stored as part of a voice corpus.
- the individual speaking the TTS training utterances may speak in different voice qualities to create the customized voice corpuses, for example the individual may whisper the training utterances, say them in an excited voice, and so on.
- the audio data of each customized voice corpus may match a desired speech quality.
- the customized voice inventory 478 may then be used during runtime to perform unit selection to synthesize speech.
- one or more filters may be used to alter traditional TTS output to match a desired speech quality (e.g., whisper, shout, etc.).
- the TTS component 414 may synthesize speech as normal, but the system 100 , either as part of the TTS component 414 or otherwise, may apply a filter to make the synthesized speech take on the desired speech quality. In this manner a traditional TTS output may be altered to take on the desired speech quality.
- the TTS component 414 may receive text data for speech synthesis along with an indicator for a desired speech quality of the output speech. The TTS component 414 may then select a voice matching the speech quality, either for unit selection or parametric synthesis, and synthesize speech using the received text data and speech quality indicator.
- FIG. 5 illustrates a user profile storage 502 that includes data regarding users of a device.
- the user profile storage 502 may be located proximate to the server(s) 120 , or may otherwise be in communication with various components of the system 100 , for example over the network 199 .
- the user profile storage 502 may include a variety of information related to individual users, accounts, etc. that interact with the system 100 .
- the user profile storage 502 is a cloud-based storage.
- the user profile storage 502 may include data regarding multiple users of a single speech-controlled device 110 (or other device).
- Each user indicated in a user profile associated with a speech-controlled device 110 may be associated with training data corresponding to training spoken utterances of the respective user, such as training data 805 discussed below.
- each user indicated in the user profile associated with a speech-controlled device 110 may be associated with feature/vector data corresponding to training data of the respective user.
- each user may have a user ID that identifies the specific user for further system processing.
- FIG. 6 illustrates processing performed to prepare audio data for ASR and user recognition.
- the speech-controlled device 110 sends audio data 111 through a network(s) 199 to the server(s) 120 for processing.
- the server(s) 120 may include an acoustic front end (AFE) 256 (or other component(s)) that performs various functions on the incoming audio data 111 to prepare the incoming audio data 111 for further downstream processing, such as ASR and/or user recognition.
- AFE 256 may perform ( 602 ) windowing functions on the audio data 111 to create framed audio data 603 (e.g., waveforms). The size of each frame may depend upon implementation.
- each frame may include twenty-five (25) milliseconds (ms) of audio data, with an overlap of the next frame of 10 ms of data, thus resulting in sliding window processing of audio data.
- Performing a windowing function may include multiplying a time record by a finite-length window with an amplitude that varies smoothly and gradually toward zero at its edges. By performing such, the endpoints of the waveforms of respective frames of audio data meet, resulting in a continuous waveform without sharp transitions.
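The framing and windowing step ( 602 ) described above can be sketched as follows. The Hann window is used here as one common choice of a window that tapers smoothly to zero at its edges; the function name, sample rate, and window choice are assumptions for illustration.

```python
# Framing/windowing sketch (602): split audio into 25 ms frames with a
# 10 ms hop (sliding window), multiplying each frame by a Hann window
# whose amplitude varies smoothly toward zero at the frame edges.
import math

def frame_and_window(samples, sample_rate=16000, frame_ms=25, hop_ms=10):
    frame_len = sample_rate * frame_ms // 1000   # 400 samples at 16 kHz
    hop = sample_rate * hop_ms // 1000           # 160 samples at 16 kHz
    # Hann window: zero at the endpoints, one near the center.
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (frame_len - 1))
              for n in range(frame_len)]
    frames = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        frames.append([s * w for s, w in zip(frame, window)])
    return frames
```

Each windowed frame would then be passed to the FFT step ( 604 ) to obtain frequency domain framed audio data.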
- the AFE 256 may then perform ( 604 ) a fast Fourier transform (FFT) that converts the waveforms in each frame of the framed audio data 603 from its original domain (e.g., time) to a representation in a frequency domain (thereby creating frequency domain framed audio data 605 ).
- Audio processing techniques other than or in addition to FFT may be used to transform audio data (e.g., waveforms) into data that can be processed as needed.
- the server(s) 120 (through the AFE 256 or using another component) then detects ( 610 ) whether voice activity (i.e., speech) is present in the post-FFT waveforms (i.e., frequency domain framed audio data 605 ). In doing so the server(s) 120 may perform VAD operations discussed above.
- the VAD detector 610 (or other components) may also be configured in a different order, for example the VAD detector 610 may operate on input audio data 111 rather than on frequency domain framed audio data 605 , may operate on ASR features, etc.
- Various different configurations of components are possible.
- If no speech is detected, the server(s) 120 discards the frequency domain framed audio data 605 (i.e., removes the audio data from the processing stream). If, instead, the server(s) 120 detects speech in the frequency domain framed audio data 605 , the server(s) 120 performs user recognition feature extraction ( 608 ) on the frequency domain framed audio data 605 .
- User recognition feature extraction ( 608 ) may include performing frame level feature extraction and/or utterance level feature extraction.
- the frame level feature extraction may determine which frame of a universal background model (UBM) the frame corresponds to.
- the UBM may be a Gaussian mixture model, a deep neural network, etc.
- the utterance level feature extraction may analyze aligned speech frames to derive feature vectors of fixed length (i.e., the user recognition feature/vector data 609 ). The feature extraction may continue until voice activity is no longer detected in the input audio data, at which point the server(s) 120 may determine that an endpoint of the speech has been reached and may cease processing with respect thereto.
- ASR feature extraction ( 606 ) may be performed on all the audio data 111 received from the speech-controlled device 110 . Alternatively (not illustrated), ASR feature extraction ( 606 ) may only be performed on audio data including speech (as indicated by the VAD 610 ).
- ASR feature extraction ( 606 ) and user recognition feature extraction ( 608 ) involve determining values (i.e., features) representing qualities of the frequency domain framed audio data 605 , along with quantitating those features into values (i.e., feature vectors or audio feature vectors).
- ASR feature extraction ( 606 ) may determine ASR feature/vector data 607 useful for ASR processing, and user recognition feature extraction ( 608 ) may determine user recognition feature/vector data 609 useful for user recognition.
- the ASR feature/vector data 607 and the user recognition feature/vector data 609 may be the same features/vectors, different features/vectors, or may include some overlapping features/vectors.
- a number of approaches may be used to extract features/vectors from the frequency domain framed audio data 605 , such as MFCCs, PLP techniques, neural network feature vector techniques, linear discriminant analysis, semi-tied covariance matrices, or other approaches known to those skilled in the art.
- the ASR feature/vector data 607 may include a different audio feature vector for each audio frame.
- the ASR feature extraction component 606 may output a single ASR feature vector.
- the ASR feature vectors 607 output by the ASR feature extraction component 606 may be output to an ASR component 250 to perform speech recognition.
- the user recognition feature extraction component 608 may output multiple user recognition feature vectors, for example one such vector for each audio frame.
- the user recognition feature extraction component 608 may continue to input the frequency domain framed audio data 605 while the utterance is ongoing (e.g., before the endpoint of the speech is reached in the audio data 605 ). While the audio data 605 for the utterance is input, the user recognition feature extraction component 608 may accumulate or otherwise combine the audio data 605 as it comes in. That is, for a certain frame's worth of audio data 605 that comes in, the user recognition feature extraction component 608 may combine that frame's worth of data to the previous data received for the particular utterance.
- the particular method of accumulation may vary, including using an arithmetic component, a recurrent neural network (RNN), trained model, or other component capable of combining audio data.
- the form of combination performed by the user recognition feature extraction component 608 may depend on what audio qualities are determined to be important for ultimate user recognition.
- the user recognition feature extraction component 608 may be trained to isolate and process data that is most useful for user recognition.
- the output of the user recognition feature extraction component 608 may thus include user recognition feature/vector data 609 that includes values for features useful for user recognition.
- the resulting user recognition feature/vector data 609 may then be used for user recognition.
- the user recognition feature/vector data 609 may include multiple vectors each corresponding to different portions of the input utterance.
- the user recognition feature/vector data 609 may be a single vector representing audio qualities of the input utterance.
- the single vector may be created using an encoder 750 which can create a fixed-size vector to represent certain characteristics of the audio data entities as described below.
- feature values 702 through 706 (which may include feature vectors of audio data 111 , frequency domain framed audio data 605 , or the like) may be input into an encoder 750 which will output an encoded feature vector 710 that represents the input feature values.
- the VAD 610 may be an input into the encoder 750 such that the encoder 750 may only operate when feature values input therein correspond to speech.
- the individual feature values (e.g., 702 - 706 ) may correspond to specific audio frames.
- any particular encoder 750 will be configured to output vectors of the same size, thus ensuring a continuity of output encoded vector size from any particular encoder 750 (though different encoders may output vectors of different fixed sizes) and enabling comparison of different feature vectors y.
- the value y may be called an embedding of the sequence x 1 , . . . x N .
- the lengths of x n and y are fixed and known a priori, but the length N of the feature sequence x 1 , . . . x N is not necessarily known a priori.
- the encoder E may be implemented as a neural network (NN), recurrent neural network (RNN), GMM, or other model.
- One particular example is a long short-term memory (LSTM) RNN.
- FIG. 7 illustrates operation of the encoder 750 .
- the input feature value sequence starting with feature value x 1 702 , continuing through feature value x n 704 and concluding with feature value x N 706 is input into the encoder 750 .
- the encoder 750 may process the input feature values as noted above.
- the encoder 750 outputs the encoded feature vector y 710 , which is a fixed length feature vector of length F.
- the user recognition feature extraction component 608 may include an encoder 750 that receives audio feature values for a particular utterance as input, and outputs a fixed length encoded feature vector y 710 , which may be the user recognition feature/vector data 609 .
- the output feature vector 710 / 609 will be of the same length, thus allowing for more ease of performing user recognition by the user recognition component 802 .
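The key property of the encoder 750, that a variable-length feature sequence x 1 . . . x N maps to a fixed-length vector y of length F, can be sketched as below. Mean pooling stands in for the trained NN/RNN/LSTM the text names, and the dimensions are illustrative assumptions.

```python
# Encoder sketch: maps a variable-length sequence of audio feature
# vectors (each of dimension F) to one fixed-length vector y of length
# F, regardless of how many frames N arrive. Mean pooling is a
# simplified stand-in for a trained RNN/LSTM encoder.
def encode(frames, out_dim=4):
    """frames: list of N feature vectors, each of length out_dim (F)."""
    n = len(frames)
    # Average each dimension across all frames: the output length is
    # always F, independent of the number of input frames N.
    return [sum(frame[d] for frame in frames) / n for d in range(out_dim)]
```

Because every utterance yields a vector of the same length F, downstream user recognition can compare utterance vectors directly against stored training vectors of the same size.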
- a final vector 609 may include many dimensions (e.g., several hundred), thus providing many datapoints for downstream consideration.
- the system may (for example using VAD detector 610 ) determine that voice activity is detected in input audio. This may indicate the beginning of the utterance, thus resulting in the system determining that the input utterance starts at a first point in audio data. Audio processing (for example by windowing 602 , FFT 604 , ASR feature extraction 606 , user recognition feature extraction 608 , ASR component 250 , or the like) may continue on the utterance audio data starting at the first point and continuing until the VAD detector 610 determines that voice activity is no longer detected at a second point in audio data. Thus, the system may determine that the input utterance ends at the second point.
- the first point may be considered the beginpoint of the utterance and the second point may be considered the endpoint of the utterance.
- the VAD detector 610 may signal the user recognition feature extraction component 608 when the beginpoint and/or endpoint is detected so that the user recognition feature extraction component 608 may begin processing audio data starting at the beginpoint and ending at the endpoint. Further, audio frames during the utterance that do not include speech may be filtered out by the VAD detector 610 and thus not considered by the ASR feature extraction 606 and/or user recognition feature extraction 608 .
- the resulting accumulated/processed speech audio data for the utterance may then be represented in a single feature vector for the user verification feature/vector data 609 , which may then be used for user recognition.
- FIG. 8 illustrates user recognition as performed by the server(s) 120 .
- the ASR component 250 performs ASR on the ASR feature/vector data 607 as described above.
- The ASR output (i.e., text data 300 ) may then be passed to the NLU component 260 as described above. The ASR component 250 may also output ASR confidence data 807 , which is passed to a user recognition component 802 .
- the user recognition component 802 of the server(s) 120 performs user recognition using various data including the user recognition feature/vector data 609 , training data 805 corresponding to sample audio data corresponding to known users, the ASR confidence data 807 and secondary data 809 .
- the user recognition component 802 may then output user recognition confidence data 811 which reflects a certain confidence that the input utterance was spoken by one or more particular users.
- the user recognition confidence data 811 may include an indicator of the verified user (such as a user ID corresponding to the speaker of the utterance) along with a confidence value corresponding to the user ID, such as a numeric value or binned value as discussed below.
- the training data 805 may be stored in a user recognition data storage 804 .
- the user recognition data storage 804 may be stored by the server(s) 120 , or may be a separate device. Further, the user recognition data storage 804 may be part of user profile storage 502 .
- the user recognition data storage 804 may be a cloud-based storage.
- the training data 805 stored in the user recognition data storage 804 may be stored as waveforms and/or corresponding features/vectors.
- the training data 805 may correspond to data from various audio samples, each audio sample associated with a known user and/or user identity. For example, each user known to the system may be associated with some set of training data 805 for the known user.
- the user recognition component 802 may then use the training data 805 to compare against incoming audio data (represented by user recognition feature/vector data 609 ) to determine the identity of a user speaking an utterance.
- the training data 805 stored in the user recognition data storage 804 may thus be associated with multiple users of multiple devices.
- the training data 805 stored in the storage 804 may be associated with both a user that spoke the respective utterance, as well as the speech-controlled device 110 that captured the respective utterance.
- the training data 805 for a particular user may include a feature vector of the same size as a vector of the user recognition feature/vector data 609 .
- the training data 805 may also be a feature vector of size F.
- the system may either prompt a user to speak sample audio data or may identify sample audio data known to have been spoken by a particular user.
- the system may then process the sample audio data (for example into feature values such as 702 - 706 and then by encoder 750 ) to create sample training data 805 (e.g., a feature vector of size F).
- the training data 805 may then be stored by the system (such as in data storage 804 ) and saved for use during runtime user recognition processing.
- the user recognition component 802 may determine the speech-controlled device 110 from which the audio data 111 originated.
- the audio data 111 may include a tag indicating the speech-controlled device 110 .
- Either the speech-controlled device 110 or the server(s) 120 may tag the audio data 111 as such.
- the tag indicating the speech-controlled device 110 may be associated with the user recognition feature/vector data 609 produced from the audio data 111 .
- the user recognition component 802 may send a signal to the user recognition data storage 804 , with the signal requesting only training data 805 associated with known users of the speech-controlled device 110 from which the audio data 111 originated.
- This may include accessing a user profile associated with the speech-controlled device 110 and then only inputting training data 805 associated with users corresponding to the user profile of the device 110 .
- the user recognition component 802 may access all (or some other subset of) training data 805 available to the system. However, accessing all training data 805 will likely increase the amount of time needed to perform user recognition based on the magnitude of training data to be processed.
- the user recognition component 802 may determine features/vectors of the waveform(s) or otherwise convert the waveform into a data format that can be used by the user recognition component 802 to actually perform the user recognition. The user recognition component 802 may then identify the user that spoke the utterance in the audio data 111 by comparing features/vectors of the user recognition feature/vector data 609 to training features/vectors (either received from the storage 804 or determined from training data 805 received from the storage 804 ).
- the user recognition component 802 may include a scoring component 822 which determines respective scores indicating whether the input utterance (represented by user verification feature/vector data 609 ) was spoken by particular users (represented by training data 805 ).
- the user recognition component 802 may also include a confidence component 824 which determines an overall confidence as the accuracy of the user recognition operations (such as those of the scoring component 822 ) and/or an individual confidence for each user potentially identified by the scoring component 822 .
- the output from the scoring component 822 may include scores for all users with respect to which user recognition was performed (e.g., all users associated with the speech-controlled device 110 ).
- the output may include a first score for a first user, a second score for a second user, and a third score for a third user, etc.
- the scoring component 822 and confidence component 824 may be combined into a single component or may be separated into more than two components.
- the scoring component 822 and confidence component 824 may implement one or more trained machine learning models (such as neural networks, classifiers, etc.) as known in the art.
- the scoring component 822 may use probabilistic linear discriminant analysis (PLDA) techniques. PLDA scoring determines how likely it is that the input user recognition feature vector 609 corresponds to a particular training data feature vector 805 for a particular user.
- the PLDA scoring may generate similarity scores for each training feature vector considered and may output the list of scores and users and/or the user ID of the speaker whose training data feature vector most closely corresponds to the input user recognition feature vector 609 of the utterance.
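The scoring step can be sketched as below. Cosine similarity is used here as a simplified stand-in for the PLDA scoring the text describes (PLDA additionally models within-speaker and between-speaker variability); the enrolled vectors and function names are illustrative assumptions.

```python
# Scoring sketch (822): compare the utterance's user recognition feature
# vector against each enrolled user's training vector and return a
# per-user similarity score. Cosine similarity is a simplified stand-in
# for PLDA scoring.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def score_users(utterance_vec, training_vecs):
    """training_vecs: dict mapping user ID -> enrolled feature vector."""
    return {user: cosine(utterance_vec, vec) for user, vec in training_vecs.items()}
```

The resulting score dictionary corresponds to the list of scores and user IDs that the scoring component may output for downstream confidence processing.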
- the scoring component 822 may also use other techniques such as GMMs, generative Bayesian models, or the like, to determine similarity scores.
- the confidence component 824 may input various data including information about the ASR confidence 807 , utterance length (e.g., number of frames or time of the utterance), audio condition/quality data (such as signal-to-interference data or other metric data), fingerprint data, image data, or other factors to consider how confident the user recognition component 802 is with regard to the scores linking users to the input utterance.
- the confidence component 824 may also consider the similarity scores and user IDs output by the scoring component 822 . Thus, the confidence component 824 may determine that a lower ASR confidence 807 , or poor input audio quality, or other factors, may result in a lower confidence of the user recognition component 802 .
- a higher ASR confidence 807 may result in a higher confidence of the user recognition component 802 .
- Precise determination of the confidence may depend on configuration and training of the confidence component 824 and the models used therein.
- the confidence component 824 may operate using a number of different machine learning models/techniques such as GMM, neural networks, etc.
- the confidence component 824 may be a classifier configured to map a score output by the scoring component 822 to a confidence.
- the user recognition component 802 may output user recognition confidence data 811 specific to a single user, or multiple users in the form of an N-best list.
- the user recognition component 802 may output user recognition confidence data 811 with respect to each user indicated in the profile associated with the speech-controlled device 110 from which the audio data 111 was received.
- the user recognition confidence data 811 may include particular scores (e.g., 0.0-1.0, 0-1000, or whatever scale the system is configured to operate). Thus the system may output an N-best list of potential users with confidence scores (e.g., John—0.2, Jane—0.8).
- the user recognition confidence data 811 may include binned recognition indicators. For example, a computed recognition score of a first range (e.g., 0.0-0.33) may be output as “low,” a computed recognition score of a second range (e.g., 0.34-0.66) may be output as “medium,” and a computed recognition score of a third range (e.g., 0.67-1.0) may be output as “high.” Thus, the system may output an N-best list of potential users with binned scores (e.g., John—low, Jane—high). Combined binned and confidence score outputs are also possible.
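The binning and N-best list output can be sketched as follows. The bin boundaries follow the example ranges given in the text (0.34 and 0.67); the function names are assumptions for illustration.

```python
# Sketch of mapping numeric recognition scores to binned indicators
# ("low"/"medium"/"high") and producing an N-best list of potential
# users sorted by confidence.
def bin_score(score):
    if score < 0.34:
        return "low"
    if score < 0.67:
        return "medium"
    return "high"

def n_best(scores):
    """scores: dict of user ID -> confidence; returns sorted (user, bin) list."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(user, bin_score(s)) for user, s in ranked]
```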
- the user recognition confidence data 811 may only include information related to the top scoring user as determined by the user recognition component 802 .
- the scores and bins may be based on information determined by the confidence component 824 .
- the user recognition component 802 may also output a confidence value that the scores/bins are correct, where the confidence value indicates how confident the user recognition component 802 is in the output results. This confidence value may be determined by the confidence component 824 .
- the confidence component 824 may determine individual user confidences and differences between user confidences when determining the user recognition confidence data 811 . For example, if a difference between a first user's confidence score and a second user's confidence score is large, and the first user's confidence score is above a threshold, then the user recognition component 802 is able to recognize the first user as the user that spoke the utterance with a much higher confidence than if the difference between the users' confidences were smaller.
- the user recognition component 802 may perform certain thresholding to avoid incorrect user recognition results being output. For example, the user recognition component 802 may compare a confidence score output by the confidence component 824 to a confidence threshold. If the confidence score is not above the confidence threshold (for example, a confidence of “medium” or higher), the user recognition component 802 may not output user recognition confidence data 811 , or may only include in that data 811 an indication that a user speaking the utterance could not be verified. Further, the user recognition component 802 may not output user recognition confidence data 811 until enough user recognition feature/vector data 609 is accumulated and processed to verify the user above a threshold confidence. Thus the user recognition component 802 may wait until a sufficient threshold quantity of audio data of the utterance has been processed before outputting user recognition confidence data 811 . The quantity of received audio data may also be considered by the confidence component 824 .
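The thresholding logic described above might look like the following sketch. The threshold values and the dictionary-shaped output are illustrative assumptions, not the patent's interface.

```python
# Thresholding sketch: only emit a recognition result when the top
# confidence clears a threshold and enough audio of the utterance has
# been processed; otherwise wait, or report that the speaker could not
# be verified. Threshold values are illustrative.
def maybe_output(scores, frames_processed, min_conf=0.67, min_frames=50):
    if frames_processed < min_frames:
        return None                      # wait for more of the utterance
    user = max(scores, key=scores.get)
    if scores[user] < min_conf:
        return {"verified": False}       # speaker could not be verified
    return {"verified": True, "user": user, "confidence": scores[user]}
```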
- the user recognition component 802 may be defaulted to output binned (e.g., low, medium, high) user recognition confidence data 811 .
- if multiple users are associated with the same binned confidence, the command processor(s) 290 may not be able to determine which user to select content for.
- the user recognition component 802 may be configured to override its default setting and output user recognition confidence data 811 including values (e.g., 0.0-1.0) associated with the users associated with the same binned confidence. This enables the command processor(s) 290 to select content associated with the user associated with the highest confidence value.
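A possible shape for this binned-plus-raw-value output is sketched below; the bin boundaries and the tie-break rule are assumptions, since the disclosure only states that raw values may be included when multiple users share a bin.

```python
# Hypothetical sketch of binned confidence output with a raw-score
# tie-break, as described above; bin boundaries are assumptions.

def bin_confidence(score):
    """Map a raw confidence in [0.0, 1.0] to a bin."""
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "medium"
    return "low"

def recognition_output(scores):
    """Return (user_id, bin, raw_score) for the selected user.

    If several users share the top bin, fall back to the raw values
    (the 'override' behavior described above) to pick one.
    """
    binned = {uid: bin_confidence(s) for uid, s in scores.items()}
    top_bin = max(binned.values(), key=["low", "medium", "high"].index)
    tied = [uid for uid, b in binned.items() if b == top_bin]
    best = max(tied, key=lambda uid: scores[uid])
    return best, top_bin, scores[best]
```

Here two users both binned "high" are separated by their raw values, which is what enables a downstream command processor to select one user's content.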
- the user recognition confidence data 811 may also include the user IDs corresponding to the potential user(s) who spoke the utterance.
- the user recognition component 802 may use secondary data 809 to inform user recognition processing.
- secondary data 809 may include a wide variety of data types depending on system configuration and may be made available from other sensors, devices, or storage such as user profile data 502 , etc.
- the secondary data 809 may include a time of day at which the audio data was captured, a day of a week in which the audio data was captured, the text data 300 , and/or the NLU results 385 .
- secondary data 809 may include image data or video data.
- facial recognition may be performed on image data or video data received from the speech-controlled device 110 from which the audio data 111 was received. Facial recognition may be performed by the user recognition component 802 , or another component of the server(s) 120 . The output of the facial recognition process may be used by the user recognition component 802 . That is, facial recognition output data may be used in conjunction with the comparison of the features/vectors of the audio data 111 and training data 805 to perform more accurate user recognition.
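One hypothetical way to use facial recognition output in conjunction with the audio comparison is a late fusion of per-user scores; the fixed weighting below is an assumption, as the disclosure does not specify a fusion method.

```python
# Hypothetical late fusion of per-user voice and face scores; the
# fixed weights are assumptions, not taken from the disclosure.

def fuse_scores(voice_scores, face_scores, voice_weight=0.7):
    """Combine per-user voice and face confidences into one score.

    Users without a face score (e.g., not visible to the camera)
    simply keep their voice-only score.
    """
    fused = {}
    for uid, v in voice_scores.items():
        f = face_scores.get(uid)
        fused[uid] = v if f is None else voice_weight * v + (1 - voice_weight) * f
    return fused
```

In this sketch, a face match lifts an otherwise ambiguous voice score, illustrating how facial recognition output can make the audio-based comparison more accurate.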
- the secondary data 809 may also include location data of the speech-controlled device 110 .
- the location data may be specific to a building within which the speech-controlled device 110 is located. For example, if the speech-controlled device 110 is located in user A's bedroom, such location may increase user recognition confidence data associated with user A, but decrease user recognition confidence data associated with user B.
- the secondary data 809 may further include type data indicating a type of the speech-controlled device 110 .
- Different types of speech-controlled devices may include, for example, a smart watch, a smart phone, a tablet computer, and a vehicle.
- the type of the speech-controlled device 110 may be indicated in a profile associated with the speech-controlled device 110 . For example, if the speech-controlled device 110 from which the audio data 111 was received is a smart watch or vehicle belonging to user A, the fact that the speech-controlled device 110 belongs to user A may increase user recognition confidence data associated with user A, but decrease user recognition confidence data associated with user B.
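One hypothetical way to fold such device-profile signals (room location, device ownership) into the per-user scores is sketched below; the adjustment amount and the profile field names are assumptions.

```python
# Minimal sketch of how secondary data (device location and device
# type/ownership) might nudge per-user confidence scores up or down.
# The boost amount and profile fields are assumptions.

def apply_secondary_data(scores, device_profile, boost=0.1):
    """Return a copy of scores adjusted by device ownership/location.

    device_profile: e.g. {"owner": "userA", "room_owner": "userA"}
    """
    adjusted = dict(scores)
    favored = {device_profile.get("owner"), device_profile.get("room_owner")}
    for uid in adjusted:
        if uid in favored:
            # Device belongs to (or sits in a room of) this user:
            # increase that user's confidence, clamped to 1.0.
            adjusted[uid] = min(1.0, adjusted[uid] + boost)
        else:
            # Decrease confidence for other users, clamped to 0.0.
            adjusted[uid] = max(0.0, adjusted[uid] - boost)
    return adjusted
```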
- the secondary data 809 may additionally include geographic coordinate data associated with the speech-controlled device 110 .
- a profile associated with a vehicle may indicate multiple users (e.g., user A and user B).
- the vehicle may include a global positioning system (GPS) indicating latitude and longitude coordinates of the vehicle when the audio data 111 is captured by the vehicle.
- Global coordinates and associated locations (e.g., work, home, etc.) may be indicated in a user profile associated with the speech-controlled device 110 .
- the global coordinates and associated locations may be associated with respective users in the user profile.
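As an illustrative sketch of matching reported coordinates against the labeled locations in a profile (the distance threshold and profile layout are assumptions):

```python
# Hypothetical sketch of matching reported GPS coordinates against
# labeled locations stored in a user profile (field names assumed).
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def label_location(lat, lon, profile_locations, max_km=0.2):
    """Return the profile label ('home', 'work', ...) nearest to the
    coordinates, or None if nothing is within max_km."""
    best, best_d = None, max_km
    for label, (plat, plon) in profile_locations.items():
        d = haversine_km(lat, lon, plat, plon)
        if d <= best_d:
            best, best_d = label, d
    return best
```

The resolved label ("home", "work", etc.) could then feed into the secondary data 809 like any other signal.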
- the secondary data 809 may also include other data/signals about activity of a particular user that may be useful in performing user recognition of an input utterance. For example, if a user has recently entered a code to disable a home security alarm, and the utterance corresponds to a device at the home, signals from the home security alarm about the disabling user, time of disabling, etc. may be reflected in the secondary data 809 and considered by the user recognition component 802 .
- if a mobile device (such as a phone, Tile, dongle, or other device) associated with a particular user is detected near the speech-controlled device 110 , this may be reflected in the secondary data 809 and considered by the user recognition component 802 .
- the secondary data 809 may be configured to be included in the vector representation of the user recognition feature/vector data 609 (for example using encoder 750 ) so that all the data relating to the utterance to be processed by the scoring component 822 may be included in a single vector.
- the secondary data 809 may be reflected in one or more different data structures to be processed by the scoring component 822 .
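A toy illustration of the single-vector approach follows. This is not the encoder 750 itself; the one-hot device-type and hour-of-day encoding is an assumption made purely for illustration.

```python
# Sketch of folding secondary data into the single utterance feature
# vector handed to the scoring component; the encoding (one-hot device
# type plus normalized hour-of-day) is an illustrative assumption.

DEVICE_TYPES = ["smart_watch", "smart_phone", "tablet", "vehicle"]

def encode_secondary(device_type, hour_of_day):
    """Encode secondary data as a small numeric vector."""
    one_hot = [1.0 if device_type == t else 0.0 for t in DEVICE_TYPES]
    return one_hot + [hour_of_day / 24.0]

def build_scoring_vector(utterance_vector, device_type, hour_of_day):
    """Concatenate the utterance features with encoded secondary data
    so the scorer sees everything as one vector."""
    return list(utterance_vector) + encode_secondary(device_type, hour_of_day)
```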
- the ASR component 250 may output text data 300 , which in turn is processed by the NLU component 260 .
- the results 385 of NLU processing performed by the NLU component 260 and the user recognition confidence data 811 output by the user recognition component 802 may be sent to one or more applications, represented by command processor(s) 290 .
- the command processor(s) 290 that receives the NLU results 385 and the user recognition confidence score data 811 may be determined by the server(s) 120 as corresponding to content responsive to the utterance in the audio data 111 . For example, if the audio data 111 includes the utterance “Play my music,” the NLU results 385 and user recognition confidence data 811 may be sent to a music playing command processor 290 .
- the command processor(s) 290 may identify content responsive to the utterance and associated with the user. If, instead, user recognition confidence score data 811 associated with multiple users is passed to the command processor(s) 290 , the command processor(s) 290 may identify content responsive to the utterance and associated with the user associated with the highest recognition confidence.
- Models may be trained and operated according to various machine learning techniques.
- Such techniques may include, for example, neural networks (such as deep neural networks and/or recurrent neural networks), inference engines, trained classifiers, etc.
- Examples of trained classifiers include Support Vector Machines (SVMs), neural networks, decision trees, AdaBoost (short for “Adaptive Boosting”) combined with decision trees, and random forests. Focusing on SVM as an example, an SVM is a supervised learning model with associated learning algorithms that analyze data and recognize patterns in the data, and which are commonly used for classification and regression analysis.
- Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples into one category or the other, making it a non-probabilistic binary linear classifier. More complex SVM models may be built with the training set identifying more than two categories, with the SVM determining which category is most similar to input data. An SVM model may be mapped so that the examples of the separate categories are divided by clear gaps. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gaps they fall on. Classifiers may issue a “score” indicating which category the data most closely matches. The score may provide an indication of how closely the data matches the category.
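The "score" behavior described above can be illustrated with a linear decision function of the kind an SVM learns; here the weights are hand-picked for illustration, whereas a real SVM training algorithm would learn `w` and `b` from the labeled training examples.

```python
# Toy illustration of the classifier "score" described above: a linear
# decision function of the kind an SVM learns. The weights here are
# hand-picked for illustration only.

def decision_score(x, w, b):
    """Signed score: positive -> category A, negative -> category B;
    the magnitude indicates how closely the data matches."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(x, w, b):
    """Predict the category from which side of the gap x falls on."""
    return "A" if decision_score(x, w, b) >= 0 else "B"
```

For example, `classify([2.0, 1.0], w=[1.0, -1.0], b=0.0)` places the point on the "A" side of the separating hyperplane, and the score's magnitude reflects its distance from the boundary.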
- Training a machine learning component such as, in this case, one of the first or second models, requires establishing a “ground truth” for the training examples.
- the term “ground truth” refers to the accuracy of a training set's classification for supervised learning techniques.
- Various techniques may be used to train the models including backpropagation, statistical learning, supervised learning, semi-supervised learning, stochastic learning, or other known techniques.
- FIG. 9 is a block diagram conceptually illustrating a user device 110 (e.g., the speech-controlled device 110 described herein) that may be used with the described system.
- FIG. 10 is a block diagram conceptually illustrating example components of a remote device, such as the server 120 that may assist with ASR processing, NLU processing, or command processing.
- Multiple servers 120 may be included in the system 100 , such as one server 120 for performing ASR, one server 120 for performing NLU, etc.
- each of these devices may include computer-readable and computer-executable instructions that reside on the respective device ( 110 / 120 ), as will be discussed further below.
- Each of these devices ( 110 / 120 ) may include one or more controllers/processors ( 904 / 1004 ), that may each include a central processing unit (CPU) for processing data and computer-readable instructions, and a memory ( 906 / 1006 ) for storing data and instructions of the respective device.
- the memories ( 906 / 1006 ) may individually include volatile random access memory (RAM), non-volatile read only memory (ROM), non-volatile magnetoresistive (MRAM) and/or other types of memory.
- Each device ( 110 / 120 ) may also include a data storage component ( 908 / 1008 ), for storing data and controller/processor-executable instructions.
- Each data storage component may individually include one or more non-volatile storage types such as magnetic storage, optical storage, solid-state storage, etc.
- Each device ( 110 / 120 ) may also be connected to removable or external non-volatile memory and/or storage (such as a removable memory card, memory key drive, networked storage, etc.) through respective input/output device interfaces ( 902 / 1002 ).
- Computer instructions for operating each device ( 110 / 120 ) and its various components may be executed by the respective device's controller(s)/processor(s) ( 904 / 1004 ), using the memory ( 906 / 1006 ) as temporary “working” storage at runtime.
- a device's computer instructions may be stored in a non-transitory manner in non-volatile memory ( 906 / 1006 ), storage ( 908 / 1008 ), or an external device(s).
- some or all of the executable instructions may be embedded in hardware or firmware on the respective device in addition to or instead of software.
- Each device ( 110 / 120 ) includes input/output device interfaces ( 902 / 1002 ). A variety of components may be connected through the input/output device interfaces ( 902 / 1002 ), as will be discussed further below. Additionally, each device ( 110 / 120 ) may include an address/data bus ( 924 / 1024 ) for conveying data among components of the respective device. Each component within a device ( 110 / 120 ) may also be directly connected to other components in addition to (or instead of) being connected to other components across the bus ( 924 / 1024 ).
- the device 110 may include input/output device interfaces 902 that connect to a variety of components such as an audio output component such as a speaker 101 , a wired headset or a wireless headset (not illustrated), or other component capable of outputting audio.
- the device 110 may also include an audio capture component.
- the audio capture component may be, for example, a microphone 103 or array of microphones, a wired headset or a wireless headset (not illustrated), etc.
- the microphone 103 may be configured to capture audio. If an array of microphones is included, approximate distance to a sound's point of origin may be determined by acoustic localization based on time and amplitude differences between sounds captured by different microphones of the array.
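Acoustic localization from time differences can be sketched as below: estimate the inter-microphone delay by brute-force cross-correlation, then convert it to an angle of arrival. The two-microphone setup, sample rate, and microphone spacing are illustrative assumptions.

```python
# Sketch of acoustic localization with a two-microphone array:
# estimate the delay between the microphones by cross-correlation,
# then convert it to a bearing. Values are illustrative.
import math

def estimate_delay(sig_a, sig_b, max_lag):
    """Return the lag (in samples) of sig_b relative to sig_a that
    maximizes their cross-correlation."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        corr = sum(
            sig_a[i] * sig_b[i + lag]
            for i in range(len(sig_a))
            if 0 <= i + lag < len(sig_b)
        )
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

def angle_of_arrival(delay_samples, sample_rate, mic_spacing_m, c=343.0):
    """Convert a delay into a bearing (radians from broadside),
    assuming sound speed c in m/s."""
    delay_s = delay_samples / sample_rate
    return math.asin(max(-1.0, min(1.0, c * delay_s / mic_spacing_m)))
```

With amplitude differences added, the same idea extends to estimating approximate distance to the sound's point of origin, as described above.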
- the device 110 may additionally include an image or video capture component, such as the camera 115 .
- the camera 115 may be configured to capture data used to perform facial recognition, and ultimately user recognition.
- the input/output device interfaces 902 may connect to one or more networks 199 via a wireless local area network (WLAN) (such as WiFi) radio, Bluetooth, and/or wireless network radio, such as a radio capable of communication with a wireless communication network such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, etc.
- the server 120 may include a user recognition component 802 configured to perform user recognition as described herein.
- the server 120 may also be configured with the TTS component 414 , AFE 256 , or other components as described herein.
- the device 110 and/or the server 120 may include an ASR component 250 .
- the ASR component 250 in the device 110 may be of limited or extended capabilities.
- the ASR component 250 may include the language models 254 stored in ASR model storage component 252 . If limited speech recognition is included, the ASR component 250 may be configured to identify a limited number of words, whereas extended speech recognition may be configured to recognize a much larger range of words.
- the device 110 and/or the server 120 may include a limited or extended NLU component 260 .
- the NLU component 260 in the device 110 may be of limited or extended capabilities.
- the NLU component 260 may comprise the name entity recognition component 262 , the intent classification component 264 , and/or other components.
- the NLU component 260 may also include a stored knowledge base and/or entity library, or those storages may be separately located.
- the device 110 and/or the server 120 may also include the command processor 290 configured to execute commands/functions associated with a spoken utterance as described herein.
- each of the devices may include different components for performing different aspects of the speech processing.
- the multiple devices may include overlapping components.
- the components of the device 110 and the server 120 are exemplary, and may be located as a stand-alone device or may be included, in whole or in part, as a component of a larger device or system.
- multiple devices may contain components of the system 100 and the devices may be connected over a network 199 .
- the network 199 may include a local or private network or may include a wide network such as the Internet.
- Devices may be connected to the network 199 through either wired or wireless connections.
- the speech-controlled device 110 a , a smart phone 110 b , a smart watch 110 c , a tablet computer 110 d , and/or a vehicle 110 e may be connected to the network 199 through a wireless service provider, over a WiFi or cellular network connection, or the like.
- Other devices are included as network-connected support devices, such as the server 120 or others.
- the support devices may connect to the network 199 through a wired connection or wireless connection.
- Networked devices may capture audio using one or more built-in or connected microphones 103 or audio capture devices, with processing performed by ASR, NLU, or other components of the same device or another device connected via the network 199 , such as the ASR component 250 , the NLU component 260 , etc. of one or more servers 120 .
- the concepts disclosed herein may be applied within a number of different devices and computer systems, including, for example, general-purpose computing systems, speech processing systems, object recognition (e.g., facial recognition) systems, and distributed computing environments.
- aspects of the disclosed system may be implemented as a computer method or as an article of manufacture such as a memory device or non-transitory computer readable storage medium.
- the computer readable storage medium may be readable by a computer and may comprise instructions for causing a computer or other device to perform processes described in the present disclosure.
- the computer readable storage medium may be implemented by a volatile computer memory, non-volatile computer memory, hard drive, solid-state memory, flash drive, removable disk, and/or other media.
- one or more of the components and engines may be implemented in firmware or hardware, such as the AFE 256 , which comprises, among other things, analog and/or digital filters (e.g., filters configured as firmware to a digital signal processor (DSP)).
- the term “a” or “one” may include one or more items unless specifically stated otherwise. Further, the phrase “based on” is intended to mean “based at least in part on” unless specifically stated otherwise.
Description
- This application is a continuation of, and claims the benefit of priority of, U.S. Non-Provisional patent application Ser. No. 16/726,051, entitled SPEECH BASED USER RECOGNITION, filed Dec. 23, 2019, which claims the benefit of priority of U.S. Non-Provisional patent application Ser. No. 15/388,458, entitled SPEECH BASED USER RECOGNITION, filed Dec. 22, 2016, now issued as U.S. Pat. No. 10,522,134, the contents of which are expressly incorporated herein by reference in their entireties.
- Speech recognition systems have progressed to the point where humans can interact with computing devices by speaking. Such systems employ techniques to identify the words spoken by a human user based on the various qualities of a received audio input. Speech recognition combined with natural language understanding processing techniques enable speech-based user control of a computing device to perform tasks based on the user's spoken commands. The combination of speech recognition and natural language understanding processing techniques is referred to herein as speech processing. Speech processing may also involve converting a user's speech into text data which may then be provided to various text-based software applications.
- Speech processing may be used by computers, hand-held devices, telephone computer systems, kiosks, and a wide variety of other devices to improve human-computer interactions.
- For a more complete understanding of the present disclosure, reference is now made to the following description taken in conjunction with the accompanying drawings.
- FIG. 1 illustrates a system for verifying an identity of a user that speaks an utterance according to embodiments of the present disclosure.
- FIG. 2 is a conceptual diagram of how a spoken utterance may be processed according to embodiments of the present disclosure.
- FIG. 3 is a conceptual diagram of a system architecture for parsing incoming utterances using multiple domains according to embodiments of the present disclosure.
- FIG. 4 is a conceptual diagram of how text-to-speech processing is performed according to embodiments of the present disclosure.
- FIG. 5 illustrates data stored and associated with user profiles according to embodiments of the present disclosure.
- FIG. 6 is a flow diagram illustrating processing performed to prepare audio data for ASR and user verification according to embodiments of the present disclosure.
- FIG. 7 is a diagram of a vector encoder according to embodiments of the present disclosure.
- FIG. 8 is a system flow diagram illustrating user verification according to embodiments of the present disclosure.
- FIG. 9 is a block diagram conceptually illustrating example components of a device according to embodiments of the present disclosure.
- FIG. 10 is a block diagram conceptually illustrating example components of a server according to embodiments of the present disclosure.
- FIG. 11 illustrates an example of a computer network for use with the system.
- Automatic speech recognition (ASR) is a field of computer science, artificial intelligence, and linguistics concerned with transforming audio data associated with speech into text representative of that speech. Similarly, natural language understanding (NLU) is a field of computer science, artificial intelligence, and linguistics concerned with enabling computers to derive meaning from text input containing natural language. ASR and NLU are often used together as part of a speech processing system. Text-to-speech (TTS) is a field concerning transforming textual data into audio data that is synthesized to resemble human speech.
- Speech processing systems have become robust platforms enabled to perform a variety of speech related tasks such as playing music, controlling household devices, communicating with other users, shopping, etc. Speech processing systems may process a spoken utterance to obtain content responsive thereto (for example output music, news content, or the like). Speech processing systems may also process a spoken utterance, and therefrom perform TTS processing to create computer-generated speech responsive to the spoken utterance thus enabling the system to engage in a conversation with a user and provide feedback and prompts in spoken form.
- In certain situations, it may be desirable for a speech processing system to verify the identity of a speaking user. Identification of a user refers to determining an identity of a user. Verification refers to confirming a previously determined identity of a user. Recognition of users refers to both identification and verification of a user. Some speech processing systems may perform user recognition through speech analysis (i.e., the system may verify whether a spoken utterance originated from a particular user by using the audio data of the utterance) and/or through different techniques. Such user verification may be desired prior to performing certain tasks such as controlling a home device, accessing a user's sensitive information, or addressing the user by name in a TTS response. Further, in other situations, two different users may speak the same utterance, but intend to receive different content in response thereto. Thus, it may be desirable for the system to determine who spoke the utterance (and verify such user) in order to properly process the command. For example, two different users may both state “Play my music,” with one of the users intending to have a classical music playlist played and the other user intending to have a rock music playlist played. As such, speech processing systems may utilize user recognition to influence which eventual content is determined to be responsive to a spoken utterance. Traditional user recognition determines a similarity between speech characteristics (i.e., features) of a spoken utterance and speech characteristics of test/training speech. The test/training speech is often spoken during enrollment of a user with the speech processing system. For example, when a user enrolls in the speech processing system, the system may request the user speak certain phrases, which are stored and used for comparison purposes during user recognition.
- The present disclosure improves traditional user recognition, and increases the accuracy thereof, by incorporating a confidence output of an ASR component into the user recognition analysis. Instead of simply using speech characteristics of a spoken utterance to determine a similarity with stored test/training speech characteristics, as is the case with traditional speech based user recognition, the present disclosure may use both the characteristics of the spoken utterance as well as ASR confidence output.
- The present disclosure also improves traditional user recognition by incorporating secondary information into the user recognition analysis. Such secondary information may include, for example, where an audio capture device is located within a building (e.g., home), a type of the audio capture device, a geographic location of the audio capture device, etc. Further, the secondary information may include image data (such as from a camera that may capture an image of a speaking user), other device data (such as knowing where a user's wearable device is relative to an audio capture device), or the like.
- The teachings of the present disclosure are beneficial because they decrease the number of false user verifications, decrease the likelihood of unintentionally divulging data to an incorrectly recognized user, and generally improve system performance.
- FIG. 1 illustrates a system 100 configured to recognize an identity of a user according to the present disclosure. Although the figures and discussion illustrate certain operational steps of the system 100 in a particular order, the steps described may be performed in a different order (as well as certain steps removed or added) without departing from the intent of the disclosure. As shown in FIG. 1 , the system 100 may include one or more speech-controlled devices 110 local to a user 5 , and one or more servers 120 connected to the speech-controlled device(s) 110 across one or more networks 199 . The server(s) 120 (which may be one or more different physical devices) may be capable of performing speech processing (e.g., ASR, NLU, command processing, etc.), TTS processing, and user recognition as described herein. A single server 120 may perform all speech processing, TTS processing, and user recognition. Alternatively, multiple servers 120 may combine to perform all speech processing, TTS processing, and user recognition. Further, the server(s) 120 may execute certain commands, such as answering spoken utterances of the user 5 . In addition, certain speech detection or command execution functions may be performed by the speech-controlled device 110 . - As illustrated in
FIG. 1 , during a training phase, the speech-controlled device 110 may capture various speech (i.e., input audio 11 ) of the user 5 via a microphone 103 of the speech-controlled device 110 . For example, capturing of the training speech may occur as part of enrolling the user with the speech-controlled device 110 /system 100 . The speech-controlled device 110 may then send training data corresponding to the training speech to the server(s) 120 . Alternatively, a microphone array (not illustrated), separate from the speech-controlled device 110 , may capture the training speech. In an example, the microphone array is in direct communication with the speech-controlled device 110 such that when the microphone array captures the training speech, the microphone array sends the training data to the speech-controlled device 110 . The speech-controlled device 110 may then forward the received training data to the server(s) 120 . In another example, the microphone array is in indirect communication with the speech-controlled device 110 via a companion application of a mobile computing device, such as a smart phone, tablet, laptop, etc. In this example, when the microphone array captures the training speech, the microphone array sends the training data to the companion application, which forwards the training data to the speech-controlled device 110 . The speech-controlled device 110 may then forward the training data to the server(s) 120 . In yet another example, the microphone array is in indirect communication with the server(s) 120 via the companion application such that when the microphone array captures the training speech, the microphone array sends the training data to the companion application, which forwards the training data to the server(s) 120 . - The server(s) 120 receives (150) the training data and associates (152) the training data with the
user 5 in a user profile associated with the speech-controlled device 110 from which the training data originated. It should be appreciated that the server(s) 120 may receive (150) training data from multiple speech-controlled devices 110 of the system 100 , and may store the training data with respective users and user profiles. - The server(s) 120 may simply store, in the user profile, waveforms of training data without determining features/vectors of the training data. In this example, features/vectors of the training data may be determined each time the server(s) 120 attempts to compare features/vectors of a spoken utterance to the training data. Alternatively, upon receiving (150) training data, the server(s) 120 may determine features/vectors of the training data and associate (152) the features/vectors of the training data with the user in the user profile. This allows the server(s) 120 to only determine the features/vectors of the training data once, thereby negating duplicative processing.
- During runtime, as shown in
FIG. 1 , the microphone 103 of the speech-controlled device 110 (or a separate microphone array depending upon implementation) captures an utterance (i.e., input audio 11 ) spoken by the user 5 . The server(s) 120 receives (154) input audio data corresponding to the spoken utterance from the speech-controlled device 110 or a companion application (depending upon implementation as described herein above). The server(s) 120 may perform (156) ASR on the input audio data to determine ASR confidence data indicating an ASR confidence. The server(s) 120 may also perform (158) user recognition using the input audio data to determine user recognition confidence data indicating a user verification confidence. Performing user recognition may include comparing speech characteristics (i.e., features/vectors) of the input audio data to speech characteristics of training data stored in a user profile associated with the speech-controlled device 110 from which the input audio data was received. The server(s) 120 may perform (156) ASR and perform (158) user recognition in parallel. The server(s) 120 may also receive (160) secondary data, such as where the speech-controlled device 110 (or microphone array) is located within a building (e.g., home), a type of the speech-controlled device 110 , and a geographic location of the speech-controlled device 110 , for example. The server(s) 120 may use (162) at least one of the ASR confidence data or the secondary data to alter/modify the originally determined user recognition confidence data. - As described above, user recognition may be performed using training data captured while enrolling the user with the system 100 /speech-controlled device 110 . However, it should be appreciated that user recognition may be performed without using training data captured during an enrollment process. For example, reference data used to perform user recognition may be captured during runtime (i.e., when the user interacts with the system 100 at runtime by, for example, speaking commands). - Further details of the
system 100 configured to verify a user that speaks an utterance are explained below, following a discussion of the overall speech processing system ofFIG. 2 . Thesystem 100 may operate using various speech processing components as described inFIG. 2 .FIG. 2 is a conceptual diagram of how a spoken utterance is processed. The various components illustrated may be located on a same or different physical devices. Communication between various components illustrated inFIG. 2 may occur directly or across anetwork 199. An audio capture component, such as themicrophone 103 of the speech-controlled device 110 (or other device), capturesinput audio 11 corresponding to a spoken utterance. Thedevice 110, using awakeword detection component 220, then processes audio data corresponding to theinput audio 11 to determine if a keyword (such as a wakeword) is detected in the audio data. Following detection of a wakeword, the speech-controlleddevice 110 sendsaudio data 111, corresponding to the utterance, to aserver 120 that includes anASR component 250. Theaudio data 111 may be output from an acoustic front end (AFE) 256 located on thedevice 110 prior to transmission, or theaudio data 111 may be in a different form for processing by aremote AFE 256, such as theAFE 256 located with theASR component 250. - The
wakeword detection component 220 works in conjunction with other components of the device 110, for example the microphone 103, to detect keywords in audio data corresponding to the input audio 11. For example, the device 110 may convert input audio 11 into audio data, and process the audio data with the wakeword detection component 220 to determine whether speech is detected, and if so, whether the audio data comprising speech matches an audio signature and/or model corresponding to a particular keyword.
- The
device 110 may use various techniques to determine whether audio data includes speech. Some embodiments may apply voice activity detection (VAD) techniques. Such techniques may determine whether speech is present in input audio based on various quantitative aspects of the input audio, such as a spectral slope between one or more frames of the input audio; energy levels of the input audio in one or more spectral bands; signal-to-noise ratios of the input audio in one or more spectral bands; or other quantitative aspects. In other embodiments, the device 110 may implement a limited classifier configured to distinguish speech from background noise. The classifier may be implemented by techniques such as linear classifiers, support vector machines, and decision trees. In still other embodiments, Hidden Markov Model (HMM) or Gaussian Mixture Model (GMM) techniques may be applied to compare the input audio to one or more acoustic models in speech storage, which acoustic models may include models corresponding to speech, noise (such as environmental noise or background noise), or silence. Still other techniques may be used to determine whether speech is present in the input audio.
- Once speech is detected in the input audio, the
device 110 may use the wakeword detection component 220 to perform wakeword detection to determine when a user intends to speak a command to the device 110. This process may also be referred to as keyword detection, with the wakeword being a specific example of a keyword. Specifically, keyword detection is typically performed without performing linguistic analysis, textual analysis, or semantic analysis. Instead, incoming audio (or audio data) is analyzed to determine if specific characteristics of the audio match preconfigured acoustic waveforms, audio signatures, or other data to determine if the incoming audio "matches" stored audio data corresponding to a keyword.
- Thus, the
wakeword detection component 220 may compare audio data to stored models or data to detect a wakeword. One approach for wakeword detection applies general large vocabulary continuous speech recognition (LVCSR) systems to decode the audio signals, with wakeword searching conducted in the resulting lattices or confusion networks. LVCSR decoding may require relatively high computational resources. Another approach for wakeword spotting builds HMMs for each wakeword and for non-wakeword speech signals, respectively. The non-wakeword speech includes other spoken words, background noise, etc. There can be one or more HMMs built to model the non-wakeword speech characteristics, which are named filler models. Viterbi decoding is used to search the best path in the decoding graph, and the decoding output is further processed to make the decision on keyword presence. This approach can be extended to include discriminative information by incorporating a hybrid deep neural network (DNN)-HMM decoding framework. In another embodiment the wakeword spotting system may be built on DNN/recurrent neural network (RNN) structures directly, without an HMM involved. Such a system may estimate the posteriors of wakewords with context information, either by stacking frames within a context window for the DNN, or by using an RNN. Follow-on posterior threshold tuning or smoothing is applied for decision making. Other techniques for wakeword detection, such as those known in the art, may also be used.
- Once the wakeword is detected, the
local device 110 may "wake" and begin transmitting audio data 111 corresponding to the input audio 11 to the server(s) 120 for speech processing (e.g., for purposes of executing a command in the speech). The audio data 111 may include data corresponding to the wakeword, or the portion of the audio data corresponding to the wakeword may be removed by the local device 110 prior to sending the audio data 111 to the server 120.
- Upon receipt by the server(s) 120, an
ASR component 250 may convert the audio data 111 into text data. The ASR component 250 transcribes the audio data 111 into text data representing words of speech contained in the audio data 111. The text data may then be used by other components for various purposes, such as executing system commands, inputting data, etc. A spoken utterance in the audio data 111 is input to a processor configured to perform ASR, which then interprets the spoken utterance based on a similarity between the spoken utterance and pre-established language models 254 stored in an ASR model knowledge base (i.e., ASR model storage 252). For example, the ASR component 250 may compare the audio data 111 with models for sounds (e.g., subword units or phonemes) and sequences of sounds to identify words that match the sequence of sounds spoken in the spoken utterance of the audio data 111.
- The different ways a spoken utterance may be interpreted (i.e., the different hypotheses) may each be assigned a respective probability/confidence score representing a likelihood that a particular set of words matches those spoken in the spoken utterance. The confidence score may be based on a number of factors including, for example, a similarity of the sound in the spoken utterance to models for language sounds (e.g., an acoustic model 253 stored in the ASR model storage 252), and a likelihood that a particular word that matches the sound would be included in the sentence at the specific location (e.g., using a language model 254 stored in the ASR model storage 252). Thus, each potential textual interpretation of the spoken utterance (i.e., hypothesis) is associated with a confidence score. Based on the considered factors and the assigned confidence score, the
ASR component 250 outputs the most likely text recognized in the audio data 111. The ASR component 250 may also output multiple hypotheses in the form of a lattice or an N-best list, with each hypothesis corresponding to a confidence score or other score (e.g., probability scores, etc.).
- The device or devices including the
ASR component 250 may include an AFE 256 and a speech recognition engine 258. The AFE 256 transforms the audio data 111 into data for processing by the speech recognition engine 258. Such transformation is discussed in further detail with regard to FIG. 6 below. The speech recognition engine 258 compares the speech recognition data with acoustic models 253, language models 254, and other data models and information for recognizing the speech conveyed in the audio data 111. The AFE 256 may reduce noise in the audio data 111 and divide the digitized audio data 111 into frames representing time intervals for which the AFE 256 determines a number of values (i.e., features) representing qualities of the audio data 111, along with a set of those values (i.e., a feature vector or audio feature vector) representing features/qualities of the audio data 111 within each frame. In one configuration each audio frame includes 25 ms of audio and the frames start at 10 ms intervals, resulting in a sliding window where adjacent audio frames include 15 ms of overlapping audio. Many different features may be determined, as known in the art, and each feature represents some quality of the audio data 111 that may be useful for ASR processing. A number of approaches may be used by the AFE 256 to process the audio data 111, such as mel-frequency cepstral coefficients (MFCCs), perceptual linear predictive (PLP) techniques, neural network feature vector techniques, linear discriminant analysis, semi-tied covariance matrices, or other approaches known to those skilled in the art.
- The
speech recognition engine 258 may process data output from the AFE 256 with reference to information stored in the ASR model storage 252. Alternatively, post-AFE processed data (e.g., feature vectors) may be received by the device executing ASR processing from another source besides the internal AFE 256. For example, the speech-controlled device 110 may process audio data 111 into feature vectors (e.g., using an on-device AFE 256) and transmit the feature vector data to the server 120 across the network 199 for ASR processing. Feature vector data may arrive at the server 120 encoded, in which case it may be decoded prior to processing by the processor executing the speech recognition engine 258.
- The
speech recognition engine 258 attempts to match received feature vectors to language phonemes and words as known in the stored acoustic models 253 and language models 254. The speech recognition engine 258 computes recognition scores for the feature vectors based on acoustic information and language information. The acoustic information is used to calculate an acoustic score representing a likelihood that the intended sound represented by a group of feature vectors matches a language phoneme. The language information is used to adjust the acoustic score by considering what sounds and/or words are used in context with each other, thereby improving a likelihood that the ASR component 250 will output speech results that make sense grammatically.
- The
speech recognition engine 258 may use a number of techniques to match feature vectors to phonemes, for example using HMMs to determine probabilities that feature vectors may match phonemes. Sounds received may be represented as paths between states of the HMM and multiple paths may represent multiple possible text matches for the same sound. - Following ASR processing, the ASR results may be sent by the
speech recognition engine 258 to other processing components, which may be local to the device performing ASR and/or distributed across the network(s) 199. For example, ASR results in the form of a single textual representation of the speech, an N-best list including multiple hypotheses and respective scores, a lattice, etc. may be sent to a server, such as the server 120, for natural language understanding (NLU) processing, such as conversion of the text data into commands for execution, either by the speech-controlled device 110, by the server 120, or by another device (e.g., a server running a search engine, etc.). For example, the ASR component 250 may output text data 300 for further processing by an NLU component 260, where the text data 300 may include a single top scoring hypothesis or an N-best list including multiple hypotheses. Further, the ASR component 250 may output ASR confidence score data 807 for further processing by a user verification component 802 (discussed below) or other component. The ASR confidence score data 807 may include a respective score for each hypothesis in an N-best list or may include a single score for the top hypothesis output as the text data 300. In other configurations the ASR confidence score data 807 may include general confidence data, such as one or more values that indicate how generally confident the ASR component 250 was in its processing, without necessarily linking that confidence to a specific hypothesis. The ASR confidence score data 807 may be based on various factors such as audio quality, whether the hypotheses had similar scores or whether one hypothesis largely outscored the others, or other factors.
- The device performing NLU processing (e.g., the server 120) may include various components, including potentially dedicated processor(s), memory, storage, etc. The device performing NLU processing may include a dedicated NLU
component 260, which may include a named entity recognition (NER) component 262 and an intent classification (IC) component 264. The device performing NLU processing may additionally include NLU storage 273 and a knowledge base (not illustrated). The knowledge base is a database or other information storage that may include information about entities that may be used in resolving spoken utterances. The NLU component 260 may also utilize gazetteer information 284 stored in an entity library storage 282. The knowledge base and/or gazetteer information 284 may be used for entity resolution, for example matching ASR results with different entities (e.g., song titles, contact names, etc.). Gazetteers 284 may be linked to users (e.g., a particular gazetteer may be associated with a specific user's music collection), may be linked to certain domains (e.g., shopping), or may be organized in a variety of other ways.
- The
NLU component 260 takes text data (e.g., output from the ASR component 250 based on the input audio data 111) and attempts to make a semantic interpretation of the text data. That is, the NLU component 260 determines the meaning behind the text data based on the individual words and then implements that meaning. The NLU component 260 interprets a text string to derive an intent or a desired action from the user as well as the pertinent pieces of information in the text data that allow a device (e.g., the speech-controlled device 110, the server 120, an application server, etc.) to complete that action. For example, if a spoken utterance is processed using the ASR component 250, which outputs the text data "call mom", the NLU component 260 may determine the user intended to activate a telephone in his/her device and to initiate a call with a contact matching the entity "mom."
- The
NLU component 260 may process several textual inputs related to the same utterance. For example, if the ASR component 250 outputs N text segments (e.g., as part of an N-best list), the NLU component 260 may process all N outputs to obtain NLU results.
- The
NLU component 260 may be configured to parse and tag (i.e., annotate) text data as part of NLU processing. For example, for the text data "call mom," "call" may be tagged as a command (e.g., to execute a phone call) and "mom" may be tagged as a specific entity and target of the command. In addition, the telephone number for the entity corresponding to "mom" stored in a contact list may be included in the annotated NLU results.
- To correctly perform NLU processing of speech input, the
NLU component 260 may be configured to determine a "domain" of the utterance so as to determine and narrow down which services offered by an endpoint device (e.g., the server 120, the speech-controlled device 110, an application server, etc.) may be relevant. For example, an endpoint device may offer services relating to interactions with a telephone service, a contact list service, a calendar/scheduling service, a music player service, etc. Words in text data may implicate more than one service, and some services may be functionally linked (e.g., both a telephone service and a calendar service may utilize data from the contact list).
- The
NER component 262 receives an utterance in the form of ASR results and attempts to identify relevant grammars and lexical information that may be used to construe meaning. To do so, the NER component 262 may begin by identifying potential domains that may relate to the received utterance. The NLU storage 273 includes a database of domains 274 associated with specific devices. For example, the speech-controlled device 110 may be associated with domains for music, telephony, calendaring, contact lists, and device-specific communications. In addition, the entity library 282 may include database entries about specific services on a specific device, either indexed by Device ID, User ID, Household ID, or some other indicator.
- A domain may represent a discrete set of activities having a common theme, such as "shopping", "music", "calendaring", etc. As such, each domain may be associated with a particular language model and/or grammar database 276, a particular set of intents/actions 278, and/or a particular personalized lexicon 286. Each gazetteer 284 may include domain-indexed lexical information associated with a particular user and/or device. For example, the
Gazetteer A 284a includes domain-indexed lexical information 286aa to 286an. A user's music-domain lexical information might include album titles, artist names, and song names, for example, whereas a user's contact-list lexical information might include the names of contacts. Since every user's music collection and contact list is presumably different, this personalized information improves entity resolution.
- An utterance may be processed by applying the rules, models, and information applicable to each identified domain. For example, if an utterance potentially implicates both communications and music, the utterance will be NLU processed using the grammar models and lexical information for communications, and will also be processed using the grammar models and lexical information for music. The responses to the spoken utterance produced by each set of models are scored, with the overall highest ranked result from all applied domains ordinarily being selected as the correct result. This is described in further detail below in reference to
FIG. 3.
- An
IC component 264 parses the utterance to determine an intent(s) for each identified domain, where the intent(s) corresponds to the action to be performed that is responsive to the spoken utterance. Each domain is associated with a database 278 of words linked to intents. For example, a music intent database may link words and phrases such as "quiet," "volume off," and "mute" to a "mute" intent. The IC component 264 identifies potential intents for each identified domain by comparing words in the utterance to the words and phrases in the intents database 278.
- In order to generate a particular interpreted response, the
NER component 262 applies the grammar models and lexical information associated with the respective domain. Each grammar model 276 includes the names of entities (i.e., nouns) commonly found in speech about the particular domain (i.e., generic terms), whereas the lexical information 286 from the gazetteer 284 is personalized to the user and/or the device. For instance, a grammar model 276 associated with a shopping domain may include a database of words commonly used when people discuss shopping. - The intents identified by the
IC component 264 are linked to domain-specific grammar frameworks (included in 276) with “slots” or “fields” to be filled. For example, if “play music” is an identified intent, a grammar framework(s) may correspond to sentence structures such as “Play {Artist Name},” “Play {Album Name},” “Play {Song name},” “Play {Song name} by {Artist Name},” etc. However, to make recognition more flexible, these frameworks would ordinarily not be structured as sentences, but rather based on associating slots with grammatical tags. - For example, the
NER component 262 may parse the spoken utterance to identify words as subject, object, verb, preposition, etc., based on grammar rules and models, prior to recognizing named entities. The identified verb may be used by the IC component 264 to identify intent, which is then used by the NER component 262 to identify frameworks. A framework for an intent of "play" may specify a list of slots/fields applicable to play the identified "object" and any object modifier (e.g., a prepositional phrase), such as {Artist Name}, {Album Name}, {Song name}, etc. The NER component 262 then searches the corresponding fields in the domain-specific and personalized lexicon(s), attempting to match words and phrases in the utterance tagged as a grammatical object or object modifier with those identified in the database(s).
- This process includes semantic tagging, which is the labeling of a word or combination of words according to their type/semantic meaning. Parsing may be performed using heuristic grammar rules, or the
NER component 262 may be constructed using techniques such as HMMs, maximum entropy models, log linear models, conditional random fields (CRF), and the like.
- For instance, an utterance of "play mother's little helper by the rolling stones" might be parsed and tagged as {Verb}: "Play," {Object}: "mother's little helper," {Object Preposition}: "by," and {Object Modifier}: "the rolling stones." At this point in the process, "Play" is identified as a verb based on a word database associated with the music domain, which the
IC component 264 will determine corresponds to the "play music" intent. No determination has been made as to the meaning of "mother's little helper" and "the rolling stones," but based on grammar rules and models, it is determined that these phrases relate to the grammatical object of the spoken utterance.
- The frameworks linked to the intent are then used to determine what database fields should be searched to determine the meaning of these phrases, such as searching a user's gazetteer for similarity with the framework slots. So a framework for the "play music" intent might indicate to attempt to resolve the identified object based on {Artist Name}, {Album Name}, and {Song name}, and another framework for the same intent might indicate to attempt to resolve the object modifier based on {Artist Name}, and resolve the object based on {Album Name} and {Song Name} linked to the identified {Artist Name}. If the search of the gazetteer does not resolve the slot/field using gazetteer information, the
NER component 262 may search the database of generic words associated with the domain (i.e., in the NLU storage 273). For instance, if the utterance was "play songs by the rolling stones," after failing to determine an album name or song name called "songs" by "the rolling stones," the NER component 262 may search the domain vocabulary for the word "songs." In the alternative, generic words may be checked before the gazetteer information, or both may be tried, potentially producing two different results.
- The comparison process used by the
NER component 262 may classify (i.e., score) how closely a database entry compares to a tagged utterance word or phrase, how closely the grammatical structure of the utterance corresponds to the applied grammatical framework, and whether the database indicates a relationship between an entry and information identified to fill other slots of the framework.
- The
NER component 262 may also use contextual operational rules to fill slots. For example, if a user had previously requested to pause a particular song and thereafter requested that the speech-controlled device 110 "please un-pause my music," the NER component 262 may apply an inference-based rule to fill a slot associated with the name of the song that the user currently wishes to play, namely the song that was playing at the time the user requested to pause the music.
- The results of NLU processing may be tagged to attribute meaning to the utterance. So, for example, "play mother's little helper by the rolling stones" might produce a result of: {domain} Music, {intent} Play Music, {artist name} "rolling stones," {media type} SONG, and {song title} "mother's little helper." As another example, "play songs by the rolling stones" might produce: {domain} Music, {intent} Play Music, {artist name} "rolling stones," and {media type} SONG.
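The parse-then-resolve flow described in the preceding paragraphs (the verb identifies the intent; the object and any "by" modifier are matched against gazetteer slots, with a generic-vocabulary search as fallback) can be sketched as a toy pipeline. This is only an illustrative sketch: the tiny gazetteer, the slot names, and the helper functions below are hypothetical stand-ins, not the actual components (e.g., gazetteers 284, intents database 278) of the system.

```python
import re

# Hypothetical toy gazetteer standing in for one user's personalized,
# domain-indexed gazetteer; entries are lowercase to match the lowercased text.
GAZETTEER = {
    "Artist Name": {"the rolling stones"},
    "Song Name": {"mother's little helper", "paint it black"},
    "Album Name": {"aftermath"},
}

# Hypothetical verb-to-intent table standing in for an intents database.
INTENTS = {"play": "PlayMusicIntent"}

def parse(utterance):
    """Split '<verb> <object> [by <modifier>]' into grammatical parts."""
    m = re.match(r"(\w+)\s+(.+?)(?:\s+by\s+(.+))?$", utterance.lower())
    return m.groups()  # (verb, object, modifier or None)

def resolve(phrase, candidate_slots):
    """Return the first candidate slot whose gazetteer entries contain the phrase."""
    for slot in candidate_slots:
        if phrase in GAZETTEER.get(slot, ()):
            return slot
    return None  # a real system would fall back to the domain's generic vocabulary

def tag(utterance):
    """Tag an utterance with a domain, an intent, and resolved slots."""
    verb, obj, modifier = parse(utterance)
    result = {"domain": "Music", "intent": INTENTS[verb]}
    # Framework: the object may resolve as a song, album, or artist; a "by"
    # modifier, if present, is expected to resolve as an artist name.
    obj_slot = resolve(obj, ["Song Name", "Album Name", "Artist Name"])
    if obj_slot:
        result[obj_slot] = obj
    if modifier and resolve(modifier, ["Artist Name"]):
        result["Artist Name"] = modifier
    return result

print(tag("play mother's little helper by the rolling stones"))
```

Run on the example utterance, this yields the PlayMusicIntent with "mother's little helper" resolved into the song-name slot and "the rolling stones" into the artist-name slot, mirroring the tagged result in the paragraph above.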
- The output from the
ASR component 250 may be sent to a user recognition component 802. The user recognition component 802 performs user recognition using the audio data 111, and optionally the ASR component output. The user recognition component 802 may include a scoring component that determines respective scores indicating whether the input utterance in the audio data 111 was spoken by particular users. The user recognition component 802 may also include a confidence component that determines an overall confidence as to the accuracy of the user recognition operations.
- The output from the NLU processing, which may include tagged text data, commands, etc., and output of the user recognition component 802 (e.g., user recognition confidence data) may then be sent to a
command processor 290, which may be located on a same or separate server 120 as part of the system 100. The system 100 may include more than one command processor 290, and the command processor 290 may be determined based on the NLU output. For example, if the NLU output includes a command to play music, the command processor 290 selected may correspond to a music playing application, such as one located on the speech-controlled device 110 or in a music playing appliance, configured to execute a music playing command. Many such command processors 290 may be available to the system depending on the various applications that may be invoked. If the NLU output includes a search utterance (e.g., requesting the return of search results), the command processor 290 selected may include a search engine processor, such as one located on a search server, configured to execute a search command and determine search results, which may include output text data to be processed by a TTS engine and output from a device as synthesized speech.
- The NLU operations of the
system 100 may take the form of a multi-domain architecture, such as that illustrated in FIG. 3. In the illustrated architecture, each domain (which may include a set of intents and entity slots that define a larger concept such as music, books, etc., as well as components such as trained models used to perform various NLU operations such as NER, IC, or the like) may be constructed separately and made available to the NLU component 260 during runtime operations where NLU operations are performed on text (such as text output from the ASR component 250). Each domain may have specially configured components to perform various steps of the NLU operations.
- For example, the
system 100 may include a multi-domain architecture consisting of multiple domains for intents/commands executable by the system 100 (or by other devices connected to the system 100), such as music, video, books, and information. The system 100 may include a plurality of domain recognizers 335, where each domain may include its own recognizer 263. Each recognizer may include various NLU components such as an NER component 262 and an IC component 264, as well as other components such as an entity resolver.
- For example, a music domain recognizer 263-A (Domain A) may have an NER component 262-A that identifies what slots (i.e., portions of input text data 300) may correspond to particular words relevant to that domain. The words may correspond to entities such as (for the music domain) a performer, album name, song name, etc. An
NER component 262 may use a machine learning model, such as a domain-specific conditional random field (CRF), to both identify the portions corresponding to an entity as well as identify what type of entity corresponds to the text data portion. For example, for the text "play songs by the stones," an NER component 262-A trained for a music domain may recognize that the portion of text [the stones] corresponds to an entity and an artist name. The music domain recognizer 263-A may also have its own intent classification (IC) component 264-A that determines the intent of the text, assuming that the text is within the prescribed domain. An IC component 264 may use a model, such as a domain-specific maximum entropy classifier, to identify the intent of the text, where the intent is the action the user desires the system 100 to perform.
- As illustrated in
FIG. 3, multiple domains may operate substantially in parallel, with different domain-specific components. That is, domain B for video may have its own recognizer 263-B including an NER component 262-B and an IC component 264-B. Domain C for books may also have similar components in its recognizer 263-C, and so on for the different domains available to the system. When input text data 300 (e.g., ASR output text data) is received, the same text data that is input into the NLU pipeline for domain A 263-A may also be input into the NLU pipeline for domain B 263-B, where the components for domain B 263-B will operate on the text data as if the text data related to domain B, and so on for the different NLU pipelines for the different domains. Each domain-specific NLU pipeline will create its own domain-specific NLU results, for example NLU results A (for domain A), NLU results B (for domain B), NLU results C (for domain C), and so on. The different NLU results may then be ranked and further processed using other downstream components as explained below.
- As shown in
FIG. 3, an NER component 262 and IC component 264 may be considered part of a recognizer (such as recognizer 263-A, 263-B, 263-C, etc.). The recognizers may operate using machine learning trained models such as a CRF, maximum entropy classifier, neural network (such as a deep neural network (DNN) or recurrent neural network (RNN)), or other classifier. The recognizers 335 may also use rules that operate on input query text data in a particular form to identify named entities and/or intents. The recognizers 335 may also operate using a data structure such as a finite state transducer (FST) to process the query text data to perform NER and/or IC. Other techniques or models may also be used to perform NER and/or IC. The techniques may also be used together. For example, a set of rules, an FST, and a trained machine learning model may all operate on input text data substantially in parallel to determine the named entities/intents of an input utterance. If one technique performs its task with high enough confidence, the system 100 may use the output of that technique over the others. The system 100 may also prioritize the results of different techniques in certain circumstances (for example, rules results may be higher priority than FST results, which may be higher priority than model results, or some other configuration). Each recognizer (such as 263-A, 263-B, 263-C, etc.) may have its own rules, FSTs, and/or models operating such that each recognizer operates substantially in parallel to the other recognizers to come up with its own interpretation of the input text data.
- The output of each
recognizer 335 is an N-best list 340 of intents and identified slots representing the particular recognizer's top choices as to the meaning of the input text data, along with scores for each item in the N-best list 340. For example, for input text data 300 of "play poker face by lady gaga," the music domain recognizer 263-A may output an N-best list 340 in the form of:
- [0.95] PlayMusicIntent ArtistName: Lady Gaga SongName: Poker Face
- [0.02] PlayMusicIntent ArtistName: Lady Gaga
- [0.01] PlayMusicIntent ArtistName: Lady Gaga AlbumName: Poker Face
- [0.01] PlayMusicIntent SongName: Pokerface
- where the NER component 262-A of the recognizer 263-A has determined that for different items in the N-
best list 340, the words "poker face" correspond to a slot and the words "lady gaga" correspond to a slot. (Though different items in the N-best list 340 interpret those slots differently, for example labeling "poker face" as a song name in one choice but labeling it as an album name in another.) The IC component 264-A of the recognizer 263-A has also determined that the intent of the input text data 300 is a PlayMusicIntent (and selected that as the intent for each item on the music N-best list 340). The recognizer 263-A also determined a score for each item on the list representing the recognizer's confidence that the particular item is correct. As can be seen in the example, the top item has the highest score. Each recognizer of the recognizers 335 may operate on the input text data 300 substantially in parallel, resulting in a number of different N-best lists 340, one for each domain (e.g., one N-best list 340 for music, one N-best list 340 for video, etc.). The size of any particular N-best list 340 output from a particular recognizer is configurable and may be different across domains.
- While the
recognizers 335 perform NER, that is they may identify words of the input text data 300 that are important for downstream processing (sometimes called light slot filling), and may even label those words according to type (such as artist name, album name, city, or the like), the recognizers 335 may not perform entity resolution (i.e., determining the actual entity corresponding to the words of the input text data 300). Entity resolution is typically a higher latency process and involves communications with a knowledge base 272 or other component to precisely identify the specific entities. As this process is resource intensive, it may be preferable not to perform it for each item of every N-best list 340 across the multiple domains, as some items have low scores and are unlikely to be used, and any resources spent performing entity resolution on such low scoring items would be wasted. Thus, a filtering of potential results may first be performed before engaging in more resource intensive processing. To that end, the cumulative N-best lists 340 may be passed to a cross domain processing component 355 which may then further rank the individual items on the N-best lists 340 as well as perform other operations. - The cross
domain processing component 355 may include a cross-domain ranker 350. The cross-domain ranker 350 takes the group of N-best lists 340 and selects from among the lists 340 the top choices to create a new N-best list 360 that may include items from different domains, but only includes the highest scoring ones of those domains. The purpose of the cross-domain ranker 350 is to create a new list of top scoring potential results, so that downstream (more resource intensive) processes may only operate on the top choices. - As an example of a multi-domain N-
best list 360 created by the cross-domain ranker 350, take the example input text data 300 of "play the hunger games." The text data 300 may be processed by each of the recognizers 335, and each will output an N-best list 340, resulting in a group of N-best lists 340 input into the cross domain processing component 355. The cross-domain ranker 350 may then rank the individual items among the N-best lists 340 to create a new N-best list 360. For example, the cross-domain ranker 350 may output an N-best list 360 in the form of: - [0.78] Video PlayVideoIntent VideoName: The Hunger Games
- [0.13] Books ReadBookIntent BookName: The Hunger Games
- [0.07] Music PlayMusicIntent AlbumName: Hunger Games
- where the top items from different N-best lists 340 from multiple domains are grouped into a single N-best list 360. As shown, the top scoring item is from the video domain 263-B, which includes the intent "playvideointent" and a slot labeled as video name corresponding to the text "the hunger games." The next item is from the books domain 263-C, and includes the intent "readbookintent" and a slot labeled as book name corresponding to the text "the hunger games." Each item in the N-best list 360 may also include a score. The size of the cross domain N-best list 360 is configurable. - While the
cross-domain ranker 350 takes as input the N-best lists 340, it may also consider other information, such as other data 391. - The cross
domain processing component 355 may also include a light slot filler component 352. The light slot filler 352 can take information from slots and alter it to make the data more easily processed by downstream components. The operations of the light slot filler 352 are typically low latency operations that do not involve heavy operations such as reference to a knowledge base. The purpose of the light slot filler 352 is to replace words with other words or values that may be more easily understood by downstream components. For example, if input text data 300 included the word "tomorrow", the light slot filler 352 may replace the word "tomorrow" with an actual date for purposes of downstream processing. Similarly, a word "CD" may be replaced by a word "album." The replaced words are then included in the cross domain N-best list 360. - The N-
best list 360 is then output to a heavy slot filler and entity resolution component 370. The heavy slot filler and entity resolver 370 can apply rules or other instructions to standardize labels or tokens from previous stages into an intent/slot representation. The precise transformation may depend on the domain (for example, for a travel domain a text mention of "Boston airport" may be transformed to the standard BOS three-letter code referring to the airport). The resolver 370 can refer to an authority source (such as a knowledge base 272) that is used to specifically identify the precise entity referred to in the entity mention identified in the incoming text. Specific intent/slot combinations may also be tied to a particular source, which may then be used to resolve the text (such as by providing information or a command to be executed in response to a user query). For example, in the example "play songs by the stones," the resolver 370 may reference a personal music catalog, Amazon Music account, user profile (discussed in detail below), or the like. The output from the entity resolution component 370 may include an altered N-best list that is based on the cross-domain N-best list 360 but also includes more detailed information about the specific entities mentioned in the text data 300 (such as specific entity IDs) and/or more detailed slot data that can eventually be used by a command processor 290 which may be incorporated into the same system components or pipeline or may be on a separate device in communication with the system 100. While illustrated as a cross-domain resolver, multiple resolution components 370 may exist where a particular resolution component 370 may be specific for one or more domains. - As can be appreciated, the
entity resolver 370 may not necessarily be successful in resolving every entity and filling every slot. This may result in incomplete results in the combined N-best list 360. A re-scorer and final ranker component 390 may consider such errors when determining how to rank the ultimate results for potential execution. For example, if an item of the N-best list 360 comes from a book domain and includes a read book intent, but the entity resolver 370 cannot find a book with a title matching the input text data 300, that particular result may be re-scored by the final ranker 390 to be given a lower score. Each item considered by the final ranker 390 may also be assigned a particular confidence, where the confidence may be determined by a recognizer 335, the cross domain processor 355, or by the final ranker 390 itself. Those confidence scores may be used to determine how to rank the individual NLU results represented in the N-best lists 340. The confidence scores may be affected by unfilled slots. For example, if one domain is capable of filling a slot (i.e., resolving the word in the slot to an entity or other recognizable form) for an input query, the results from that domain may have a higher confidence than those from a different domain that is not capable of filling a slot. - The
final ranker 390 may be configured to apply re-scoring, biasing, or other techniques to obtain the most preferred ultimate result. To do so, the final ranker 390 may consider not only the NLU results of the N-best lists 340, but may also consider other data 391. This other data 391 may include a variety of information. For example, the other data 391 may include application rating or popularity. For example, if one application has a particularly high rating, the system 100 may increase the score of results associated with that particular application. The other data 391 may also include information about applications that have been specifically enabled by the user (as indicated in a user profile as discussed in detail below). NLU results from enabled applications may be scored higher than results from non-enabled applications. User history may also be considered, such as if the user regularly uses a particular supplemental application or does so at particular times of day. Date, time, location, weather, type of device 110, customer ID, context, and other information may also be considered. For example, the system 100 may consider when any particular applications are currently active (such as music being played, a game being played, etc.) between the system 100 and device 110. The highest scoring result (or results in the case of multiple commands being in an utterance) may be passed to a downstream command processor 290 for execution. - Following final ranking, the
NLU component 260 may output NLU output data 385. The NLU output data 385 may include an indicator of the command to be executed along with data associated with the command, for example an indication that the command is "play music" and the music to be played is "Adele." The NLU output data 385 may be in the form of previous NLU data such as item(s) in N-best list 340, item(s) in cross-domain N-best list 360, or the like. The NLU output data 385 may also be in a format executable by the command processor 290. Multiple instances of NLU output data (e.g., 385 a-385 n) may be output. - A
TTS component 414 may receive tagged text data from the command processor 290, so the TTS component 414 may synthesize speech corresponding to the text data. Speech may be synthesized by the TTS component 414 as described below with respect to FIG. 4. - The
TTS component 414 includes a TTS front end (TTSFE) 416, a speech synthesis engine 418, and a TTS storage 420. The TTSFE 416 transforms input text data (e.g., from the command processor 290) into a symbolic linguistic representation for processing by the speech synthesis engine 418. The TTSFE 416 may also process tags or other data input to the TTS component 414 that indicate how specific words should be pronounced. The speech synthesis engine 418 compares the annotated phonetic units and information stored in the TTS storage 420 for converting the input text data into speech (i.e., audio data). The TTSFE 416 and the speech synthesis engine 418 may include their own controller(s)/processor(s) and memory, or they may use the controller/processor and memory of the server 120, the speech-controlled device 110, or another device, for example. Similarly, the instructions for operating the TTSFE 416 and the speech synthesis engine 418 may be located within the TTS component 414, within the memory and/or storage of the server 120, the speech-controlled device 110, or within an external device. - Text data input into the
TTS component 414 may be sent to the TTSFE 416 for processing. The TTSFE 416 may include components for performing text normalization, linguistic analysis, and linguistic prosody generation. During text normalization, the TTSFE 416 processes the input text data and generates standard/normalized text data, converting such things as numbers, abbreviations (e.g., Apt., St., etc.), and symbols ($, %, etc.) into the equivalent of written out words. - During linguistic analysis, the
TTSFE 416 analyzes language in the normalized text data to generate a sequence of phonetic units corresponding to the text data. This process may be referred to as phonetic transcription. Phonetic units include symbolic representations of sound units to be eventually combined and output by the system 100 as speech. Various sound units may be used for dividing text data for purposes of speech synthesis. The TTS component 414 may process speech based on phonemes (i.e., individual sounds), half-phonemes, di-phones (i.e., the last half of one phoneme coupled with the first half of an adjacent phoneme), bi-phones (i.e., two consecutive phonemes), syllables, words, phrases, sentences, or other units. Each word may be mapped to one or more phonetic units. Such mapping may be performed using a language dictionary stored by the system 100, for example in the TTS storage 420. The linguistic analysis performed by the TTSFE 416 may also identify different grammatical components such as prefixes, suffixes, phrases, punctuation, syntactic boundaries, or the like. Such grammatical components may be used by the TTS component 414 to craft a natural sounding audio waveform output. The language dictionary may also include letter-to-sound rules and other tools that may be used to pronounce previously unidentified words or letter combinations that may be encountered by the TTS component 414. Generally, the more information included in the language dictionary, the higher quality the speech output. - Based on the linguistic analysis, the
TTSFE 416 may then perform linguistic prosody generation where the phonetic units are annotated with desired prosodic characteristics, also called acoustic features, which indicate how the desired phonetic units are to be pronounced in the eventual output speech. During this stage, the TTSFE 416 may consider and incorporate any prosodic annotations that accompanied the text data input to the TTS component 414. Such acoustic features may include pitch, energy, duration, and the like. Application of acoustic features may be based on prosodic models available to the TTS component 414. Such prosodic models indicate how specific phonetic units are to be pronounced in certain circumstances. A prosodic model may consider, for example, a phoneme's position in a syllable, a syllable's position in a word, a word's position in a sentence or phrase, neighboring phonetic units, etc. As with the language dictionary, a prosodic model with more information may result in higher quality speech output than prosodic models with less information. - The output of the
TTSFE 416, referred to as a symbolic linguistic representation, may include a sequence of phonetic units annotated with prosodic characteristics. This symbolic linguistic representation may be sent to the speech synthesis engine 418, also known as a synthesizer, for conversion into an audio waveform of speech for output to an audio output device and eventually to a user. The speech synthesis engine 418 may be configured to convert the input text data into high-quality natural-sounding speech in an efficient manner. Such high-quality speech may be configured to sound as much like a human speaker as possible, or may be configured to be understandable to a listener without attempts to mimic a precise human voice. - The
speech synthesis engine 418 may perform speech synthesis using one or more different methods. In one method of synthesis called unit selection, a unit selection engine 430 matches the symbolic linguistic representation created by the TTSFE 416 against a database of recorded speech, such as a database of a voice corpus (e.g., the TTS voice unit storage 472). The unit selection engine 430 matches the symbolic linguistic representation against spoken audio units in the database. Matching units are selected and concatenated together to form a speech output. Each unit includes an audio waveform corresponding with a phonetic unit, such as a short .wav file of the specific sound, along with a description of the various acoustic features associated with the .wav file (e.g., its pitch, energy, etc.), as well as other information, such as where the phonetic unit appears in a word, sentence, or phrase, neighboring phonetic units, etc. Using all the information in the unit database, the unit selection engine 430 may match units to the input text data to create a natural sounding waveform. The unit database may include multiple examples of phonetic units to provide the system 100 with many different options for concatenating units into speech. One benefit of unit selection is that, depending on the size of the database, a natural sounding speech output may be generated. As described above, the larger the unit database of the voice corpus, the more likely the system 100 will be able to construct natural sounding speech. - In another method of synthesis, called parametric synthesis, parameters such as frequency, volume, and noise are varied by a
parametric synthesis engine 432, a digital signal processor, or other audio generation device to create an artificial speech waveform output. Parametric synthesis uses a computerized voice generator, sometimes called a vocoder. Parametric synthesis may use an acoustic model and various statistical techniques to match a symbolic linguistic representation with desired output speech parameters. Parametric synthesis offers the ability to be accurate at high processing speeds, as well as the ability to process speech without the large databases associated with unit selection, but also typically produces an output speech quality that may not match that of unit selection. Unit selection and parametric techniques may be performed individually or combined together and/or combined with other synthesis techniques to produce speech audio data output. - Parametric speech synthesis may be performed as follows. The
TTS component 414 may include an acoustic model, or other models, which may convert a symbolic linguistic representation into a synthetic acoustic waveform of the input text data based on audio signal manipulation. The acoustic model includes rules which may be used by the parametric synthesis engine 432 to assign specific audio waveform parameters to input phonetic units and/or prosodic annotations. The rules may be used to calculate a score representing a likelihood that a particular audio output parameter(s) (e.g., frequency, volume, etc.) corresponds to the portion of the input symbolic linguistic representation received from the TTSFE 416. - The
parametric synthesis engine 432 may use a number of techniques to match speech to be synthesized with input phonetic units and/or prosodic annotations. One common technique is using HMMs. HMMs may be used to determine probabilities that audio output should match textual input. HMMs may be used to translate from parameters from the linguistic and acoustic space to parameters to be used by a vocoder (i.e., the digital voice encoder) to artificially synthesize the desired speech. Using HMMs, a number of states are presented, in which the states together represent one or more potential acoustic parameters to be output to the vocoder and each state is associated with a model, such as a Gaussian mixture model. Transitions between states may also have an associated probability, representing a likelihood that a current state may be reached from a previous state. Sounds to be output may be represented as paths between states of the HMM and multiple paths may represent multiple possible audio matches for the same input text data. Each portion of text data may be represented by multiple potential states corresponding to different known pronunciations of phonemes and their parts (e.g., phoneme identity, stress, accent, position, etc.). An initial determination of a probability of a potential phoneme may be associated with one state. As new text data is processed by the speech synthesis engine 418, the state may change or stay the same, based on processing of the new text data. For example, the pronunciation of a previously processed word might change based on later processed words. A Viterbi algorithm may be used to find the most likely sequence of states based on the processed text data. The HMMs may generate speech in parametrized form including parameters such as fundamental frequency (f0), noise envelope, spectral envelope, etc. that are translated by a vocoder into audio segments.
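The Viterbi step mentioned above can be illustrated on a toy two-state HMM (all state names and probabilities below are invented for illustration and are not from the system described here):

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Find the most likely state sequence for a sequence of
    observations under an HMM, given start, transition, and emission
    probabilities as nested dicts."""
    # best[s] = (probability, path) of the best path ending in state s
    best = {s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}
    for obs in observations[1:]:
        best = {
            s: max(
                ((p * trans_p[prev][s] * emit_p[s][obs], path + [s])
                 for prev, (p, path) in best.items()),
                key=lambda t: t[0],
            )
            for s in states
        }
    prob, path = max(best.values(), key=lambda t: t[0])
    return path, prob

# Toy HMM: hidden stress states emitting "low"/"high" pitch observations.
path, prob = viterbi(
    ["low", "high", "high"],
    states=["unstressed", "stressed"],
    start_p={"unstressed": 0.6, "stressed": 0.4},
    trans_p={"unstressed": {"unstressed": 0.7, "stressed": 0.3},
             "stressed":   {"unstressed": 0.4, "stressed": 0.6}},
    emit_p={"unstressed": {"low": 0.8, "high": 0.2},
            "stressed":   {"low": 0.3, "high": 0.7}},
)
```

In a real parametric synthesizer the states would carry acoustic parameters for the vocoder rather than toy labels, but the dynamic-programming recurrence is the same.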
The output parameters may be configured for particular vocoders such as a STRAIGHT vocoder, TANDEM-STRAIGHT vocoder, harmonic plus noise (HNM) based vocoders, code-excited linear prediction (CELP) vocoders, GlottHMM vocoders, harmonic/stochastic model (HSM) vocoders, or others. - For example, to create the customized speech output of the
system 100, the system 100 may be configured with multiple voice inventories 478 (stored in the TTS voice unit storage 472), where each unit database is configured with a different "voice." Such voice inventories may also be linked to user accounts, discussed below. For example, one voice corpus may be stored to be used to synthesize whispered speech (or speech approximating whispered speech), another may be stored to be used to synthesize excited speech (or speech approximating excited speech), and so on. To create the different voice corpuses a multitude of TTS training utterances may be spoken by an individual and recorded by the system 100. The TTS training utterances used to train a TTS voice corpus may be different from the training utterances used to train an ASR system. The audio associated with the TTS training utterances may then be split into small audio segments and stored as part of a voice corpus. The individual speaking the TTS training utterances may speak in different voice qualities to create the customized voice corpuses, for example the individual may whisper the training utterances, say them in an excited voice, and so on. Thus, the audio data of each customized voice corpus may match a desired speech quality. The customized voice inventory 478 may then be used during runtime to perform unit selection to synthesize speech. - As an alternative to customized voice corpuses or customized parametric "voices," one or more filters may be used to alter traditional TTS output to match a desired speech quality (e.g., whisper, shout, etc.). For example, the
TTS component 414 may synthesize speech as normal, but the system 100, either as part of the TTS component 414 or otherwise, may apply a filter to make the synthesized speech take on the desired speech quality. In this manner a traditional TTS output may be altered to take on the desired speech quality. - During runtime the
TTS component 414 may receive text data for speech synthesis along with an indicator for a desired speech quality of the output speech. The TTS component 414 may then select a voice matching the speech quality, either for unit selection or parametric synthesis, and synthesize speech using the received text data and speech quality indicator. -
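This runtime voice selection can be sketched minimally, assuming a simple inventory keyed by speech-quality indicator (the inventory names and fallback behavior below are hypothetical, not the system's actual configuration):

```python
# Hypothetical voice inventories keyed by speech-quality indicator; a
# real TTS storage would hold unit databases or parametric model data.
VOICE_INVENTORIES = {
    "whisper": "voice_whisper_v1",
    "excited": "voice_excited_v1",
    "default": "voice_neutral_v1",
}

def select_voice(speech_quality):
    """Pick the voice matching the requested speech quality, falling
    back to a default voice when no customized voice exists."""
    return VOICE_INVENTORIES.get(speech_quality, VOICE_INVENTORIES["default"])

voice = select_voice("whisper")
```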
FIG. 5 illustrates a user profile storage 502 that includes data regarding users of a device. The user profile storage 502 may be located proximate to the server(s) 120, or may otherwise be in communication with various components of the system 100, for example over the network 199. The user profile storage 502 may include a variety of information related to individual users, accounts, etc. that interact with the system 100. In an example, the user profile storage 502 is a cloud-based storage. For illustration, as shown in FIG. 5, the user profile storage 502 may include data regarding multiple users of a single speech-controlled device 110 (or other device). Each user indicated in a user profile associated with a speech-controlled device 110 may be associated with training data corresponding to training spoken utterances of the respective user, such as training data 805 discussed below. In addition or alternatively, each user indicated in the user profile associated with a speech-controlled device 110 may be associated with feature/vector data corresponding to training data of the respective user. Further, each user may have a user ID that identifies the specific user for further system processing. -
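As a rough sketch, such a profile storage might associate a device with its enrolled users and references to their stored training data (all identifiers below are hypothetical placeholders, not the system's actual schema):

```python
# Hypothetical cloud-side user profile store: one entry per device,
# listing enrolled users with their IDs and training-data references.
profile_storage = {
    "device-110": {
        "users": [
            {"user_id": "user-1", "training_data_ref": "training/user-1"},
            {"user_id": "user-2", "training_data_ref": "training/user-2"},
        ],
    },
}

def users_for_device(device_id):
    """Return the user IDs enrolled against a speech-controlled device,
    so recognition can score an utterance against each candidate."""
    entry = profile_storage.get(device_id, {"users": []})
    return [u["user_id"] for u in entry["users"]]

ids = users_for_device("device-110")
```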
FIG. 6 illustrates processing performed to prepare audio data for ASR and user recognition. The speech-controlled device 110 sends audio data 111 through a network(s) 199 to the server(s) 120 for processing. The server(s) 120 may include an acoustic front end (AFE) 256 (or other component(s)) that performs various functions on the incoming audio data 111 to prepare the incoming audio data 111 for further downstream processing, such as ASR and/or user recognition. For example, the AFE 256 may perform (602) windowing functions on the audio data 111 to create framed audio data 603 (e.g., waveforms). The size of each frame may depend upon implementation. In an example, each frame may include twenty-five (25) milliseconds (ms) of audio data, with an overlap of the next frame of 10 ms of data, thus resulting in sliding window processing of audio data. Performing a windowing function may include multiplying a time record by a finite-length window with an amplitude that varies smoothly and gradually toward zero at its edges. By performing such, the endpoints of the waveforms of respective frames of audio data meet, resulting in a continuous waveform without sharp transitions. The AFE 256 may then perform (604) a fast Fourier transform (FFT) that converts the waveforms in each frame of the framed audio data 603 from its original domain (e.g., time) to a representation in a frequency domain (thereby creating frequency domain framed audio data 605). Audio processing techniques other than or in addition to FFT may be used to transform audio data (e.g., waveforms) into data that can be processed as needed. - The server(s) 120 (through the
AFE 256 or using another component) then detects (610) whether voice activity (i.e., speech) is present in the post-FFT waveforms (i.e., frequency domain framed audio data 605). In doing so the server(s) 120 may perform VAD operations discussed above. The VAD detector 610 (or other components) may also be configured in a different order, for example the VAD detector 610 may operate on input audio data 111 rather than on frequency domain framed audio data 605, may operate on ASR features, etc. Various different configurations of components are possible. If there is no speech in the audio data, the server(s) 120 discards the frequency domain framed audio data 605 (i.e., removes the audio data from the processing stream). If, instead, the server(s) 120 detects speech in the frequency domain framed audio data 605, the server(s) 120 performs user recognition feature extraction (608) on the frequency domain framed audio data 605. User recognition feature extraction (608) may include performing frame level feature extraction and/or utterance level feature extraction. The frame level feature extraction may determine which frame of a universal background model (UBM) the frame corresponds to. The UBM may be a Gaussian mixture model, a deep neural network, etc. The utterance level feature extraction may analyze aligned speech frames to derive feature vectors of fixed length (i.e., the user recognition feature/vector data 609). The feature extraction may continue until voice activity is no longer detected in the input audio data, at which point the server(s) 120 may determine that an endpoint of the speech has been reached and may conclude feature extraction with respect thereto. ASR feature extraction (606) may be performed on all the audio data 111 received from the speech-controlled device 110. Alternatively (not illustrated), ASR feature extraction (606) may only be performed on audio data including speech (as indicated by the VAD 610).
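The windowing and FFT steps (602, 604) described above can be sketched as follows. This is a minimal illustration assuming a 16 kHz sample rate, a 25 ms frame, and the common 10 ms frame shift; the exact window shape and hop size are implementation dependent and not specified by the text:

```python
import numpy as np

def frame_and_fft(audio, sample_rate=16000, frame_ms=25, hop_ms=10):
    """Split audio into overlapping frames (25 ms window, 10 ms hop),
    apply a smooth Hann window so frame edges taper gradually to zero,
    and convert each frame to the frequency domain with an FFT."""
    frame_len = int(sample_rate * frame_ms / 1000)   # 400 samples
    hop = int(sample_rate * hop_ms / 1000)           # 160 samples
    window = np.hanning(frame_len)
    n_frames = 1 + (len(audio) - frame_len) // hop
    frames = np.stack([audio[i * hop: i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Real FFT: one spectrum per frame (frequency domain framed data).
    return np.fft.rfft(frames, axis=1)

audio = np.random.default_rng(0).standard_normal(16000)  # 1 s of noise
spectra = frame_and_fft(audio)
```

One second of 16 kHz audio yields 98 frames of 201 frequency bins each under these assumptions.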
ASR feature extraction (606) and user recognition feature extraction (608) involve determining values (i.e., features) representing qualities of the frequency domain framed audio data 605, along with quantifying those features into values (i.e., feature vectors or audio feature vectors). ASR feature extraction (606) may determine ASR feature/vector data 607 useful for ASR processing, and user recognition feature extraction (608) may determine user recognition feature/vector data 609 useful for user recognition. The ASR feature/vector data 607 and the user recognition feature/vector data 609 may be the same features/vectors, different features/vectors, or may include some overlapping features/vectors. A number of approaches may be used to extract features/vectors from the frequency domain framed audio data 605, such as MFCCs, PLP techniques, neural network feature vector techniques, linear discriminant analysis, semi-tied covariance matrices, or other approaches known to those skilled in the art. - Typically, the ASR feature/
vector data 607 may include a different audio feature vector for each audio frame. Thus, for one 25 ms long audio frame, the ASR feature extraction component 606 may output a single ASR feature vector. The ASR feature vectors 607 output by the ASR feature extraction component 606 may be output to an ASR component 250 to perform speech recognition. - Depending on system configuration, the user recognition
feature extraction component 608 may output multiple user recognition feature vectors, for example one such vector for each audio frame. Alternatively, the user recognition feature extraction component 608 may continue to input the frequency domain framed audio data 605 while the utterance is ongoing (e.g., before the endpoint of the speech is reached in the audio data 605). While the audio data 605 for the utterance is input, the user recognition feature extraction component 608 may accumulate or otherwise combine the audio data 605 as it comes in. That is, for a certain frame's worth of audio data 605 that comes in, the user recognition feature extraction component 608 may combine that frame's worth of data with the previous data received for the particular utterance. The particular method of accumulation may vary, including using an arithmetic component, a recurrent neural network (RNN), trained model, or other component capable of combining audio data. Further, the form of combination performed by the user recognition feature extraction component 608 may depend on what audio qualities are determined to be important for ultimate user recognition. Thus, the user recognition feature extraction component 608 may be trained to isolate and process data that is most useful for user recognition. The output of the user recognition feature extraction component 608 may thus include user recognition feature/vector data 609 that includes values for features useful for user recognition. The resulting user recognition feature/vector data 609 may then be used for user recognition. - The user recognition feature/
vector data 609 may include multiple vectors each corresponding to different portions of the input utterance. Alternatively, the user recognition feature/vector data 609 may be a single vector representing audio qualities of the input utterance. Referring to FIG. 7, the single vector may be created using an encoder 750 which can create a fixed-size vector to represent certain characteristics of the audio data entities as described below. In mathematical notation, given a sequence of feature data values x1, . . . xn, . . . xN, with xn being a D-dimensional vector, an encoder E(x1, . . . xN)=y projects the feature sequence to y, with y being an F-dimensional vector. F is a fixed length of the vector and is configurable depending on use of the encoded vector and other system configurations. As shown in FIG. 7, feature values 702 through 706 (which may include feature vectors of audio data 111, frequency domain framed audio data 605, or the like) may be input into an encoder 750 which will output an encoded feature vector 710 that represents the input feature values. The VAD 610 may be an input into the encoder 750 such that the encoder 750 may only operate when feature values input therein correspond to speech. The individual feature values (e.g., 702-706) may correspond to specific audio frames. Regardless of how many feature values are input, any particular encoder 750 will be configured to output vectors of the same size, thus ensuring a continuity of output encoded vector size from any particular encoder 750 (though different encoders may output vectors of different fixed sizes) and enabling comparison of different feature vectors y. The value y may be called an embedding of the sequence x1, . . . xN. The lengths of xn and y are fixed and known a priori, but the length N of the feature sequence x1, . . . xN is not necessarily known a priori. The encoder E may be implemented as a neural network (NN), recurrent neural network (RNN), GMM, or other model.
One particular example is a long short-term memory (LSTM) RNN. There are a variety of ways for the encoder 750 to consume the encoder input, including but not limited to: -
- linear, one direction (forward or backward),
- bi-linear, essentially the concatenation of a forward and a backward embedding, or
- tree, based on parse-tree of the sequence.
In addition, an attention model can be used, which is another RNN or deep neural network (DNN) that learns to “attract” attention to certain parts of the input. The attention model can be used in combination with the above methods of consuming the input.
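As a concrete illustration of the fixed-size property described above, the following is a minimal pure-Python sketch of an encoder E(x1, . . . xN)=y. Mean pooling over time stands in for the LSTM/attention consumption strategies listed; the toy projection weights and the default dimension F are invented for illustration, not taken from the disclosure:

```python
# Sketch of an encoder E: a variable-length sequence of D-dimensional feature
# frames in, one F-dimensional embedding y out. Mean pooling + a fixed (toy)
# linear projection stand in for a trained LSTM or attention model.

def encode(frames, F=4):
    """Project N frames of D values each to a single F-dimensional vector."""
    if not frames:
        raise ValueError("encoder requires at least one feature frame")
    D = len(frames[0])
    # Mean-pool across time: one D-dim summary regardless of sequence length N.
    pooled = [sum(frame[d] for frame in frames) / len(frames) for d in range(D)]
    # Fixed illustrative projection from D dimensions to F dimensions.
    return [sum(pooled[d] * 0.1 * (d + f + 1) for d in range(D)) for f in range(F)]

short = encode([[1.0, 2.0]])        # N = 1 frame
long_ = encode([[1.0, 2.0]] * 50)   # N = 50 frames
assert len(short) == len(long_)     # output size is always F, regardless of N
```

The invariant the text emphasizes is visible here: however many acoustic frames arrive, the embedding length is fixed, so any two utterance vectors can be compared directly.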
FIG. 7 illustrates operation of the encoder 750. The input feature value sequence, starting with feature value x1 702, continuing through feature value xn 704 and concluding with feature value xN 706, is input into the encoder 750. The encoder 750 may process the input feature values as noted above. The encoder 750 outputs the encoded feature vector y 710, which is a fixed-length feature vector of length F. Thus, the user recognition feature extraction component 608 may include an encoder 750 that receives audio feature values for a particular utterance as input, and outputs a fixed-length encoded feature vector y 710, which may be the user recognition feature/vector data 609. Thus, in certain system configurations, no matter how long the utterance is, or how many acoustic frames' worth of feature values are input into the encoder 750, the output feature vector 710/609 will be of the same length, thus allowing for more ease of performing user recognition by the user recognition component 802. To allow for robust system operation, a final vector 609 may include many dimensions (e.g., several hundred), thus providing many data points for downstream consideration. - To determine the user recognition feature/
vector data 609, the system may (for example using VAD detector 610) determine that voice activity is detected in input audio. This may indicate the beginning of the utterance, thus resulting in the system determining that the input utterance starts at a first point in audio data. Audio processing (for example by windowing 602, FFT 604, ASR feature extraction 606, user recognition feature extraction 608, ASR component 250, or the like) may continue on the utterance audio data starting at the first point and continuing until the VAD detector 610 determines that voice activity is no longer detected at a second point in the audio data. Thus, the system may determine that the input utterance ends at the second point. Thus, the first point may be considered the beginpoint of the utterance and the second point may be considered the endpoint of the utterance. The VAD detector 610, or another component, may signal the user recognition feature extraction component 608 when the beginpoint and/or endpoint is detected so that the user recognition feature extraction component 608 may begin processing audio data starting at the beginpoint and ending at the endpoint. Further, audio frames during the utterance that do not include speech may be filtered out by the VAD detector 610 and thus not considered by the ASR feature extraction 606 and/or user recognition feature extraction 608. The resulting accumulated/processed speech audio data for the utterance (from beginpoint to endpoint) may then be represented in a single feature vector for the user verification feature/vector data 609, which may then be used for user recognition. -
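The beginpoint/endpoint logic above can be sketched with a toy energy-threshold detector. This is only an illustration of the control flow; the actual VAD detector 610 may use much richer features or a trained model, and the threshold here is an assumed value:

```python
# Toy VAD sketch: given per-frame energies, find the beginpoint (first speech
# frame) and endpoint (one past the last speech frame) of an utterance.
# The energy threshold is illustrative, not from the disclosure.

def find_utterance_bounds(frame_energies, threshold=0.5):
    speech = [i for i, e in enumerate(frame_energies) if e >= threshold]
    if not speech:
        return None  # no voice activity detected
    return speech[0], speech[-1] + 1  # (beginpoint, endpoint), end-exclusive

energies = [0.1, 0.2, 0.9, 0.8, 0.7, 0.2, 0.1]
assert find_utterance_bounds(energies) == (2, 5)  # speech spans frames 2..4
```

Downstream components would then process only frames in `[beginpoint, endpoint)`, mirroring how the VAD detector 610 gates the feature extraction components.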
FIG. 8 illustrates user recognition as performed by the server(s) 120. The ASR component 250 performs ASR on the ASR feature/vector data 607 as described above. ASR output (i.e., text data 300) is then processed by the NLU component 260 as described above. The ASR confidence data 807 is then passed to a user recognition component 802. - The
user recognition component 802 of the server(s) 120 performs user recognition using various data including the user recognition feature/vector data 609, training data 805 corresponding to sample audio data from known users, the ASR confidence data 807, and secondary data 809. The user recognition component 802 may then output user recognition confidence data 811 which reflects a certain confidence that the input utterance was spoken by one or more particular users. The user recognition confidence data 811 may include an indicator of the verified user (such as a user ID corresponding to the speaker of the utterance) along with a confidence value corresponding to the user ID, such as a numeric value or binned value as discussed below. - The
training data 805 may be stored in a user recognition data storage 804. The user recognition data storage 804 may be stored by the server(s) 120, or may be a separate device. Further, the user recognition data storage 804 may be part of user profile storage 502. The user recognition data storage 804 may be cloud-based storage. The training data 805 stored in the user recognition data storage 804 may be stored as waveforms and/or corresponding features/vectors. The training data 805 may correspond to data from various audio samples, each audio sample associated with a known user and/or user identity. For example, each user known to the system may be associated with some set of training data 805 for the known user. The user recognition component 802 may then use the training data 805 to compare against incoming audio data (represented by user recognition feature/vector data 609) to determine the identity of a user speaking an utterance. The training data 805 stored in the user recognition data storage 804 may thus be associated with multiple users of multiple devices. Thus, the training data 805 stored in the storage 804 may be associated with both a user that spoke the respective utterance, as well as the speech-controlled device 110 that captured the respective utterance. - The
training data 805 for a particular user may include a feature vector of the same size as a vector of the user recognition feature/vector data 609. Thus, for example, if a feature vector 609 is of size F (for example encoded by encoder 750), the training data 805 may also be a feature vector of size F. To create such a training data feature vector, during a training period the system may either prompt a user to speak sample audio data or may identify sample audio data known to have been spoken by a particular user. The system may then process the sample audio data (for example into feature values such as 702-706 and then by encoder 750) to create sample training data 805 (e.g., a feature vector of size F). The training data 805 may then be stored by the system (such as in data storage 804) and saved for use during runtime user recognition processing. - To perform user recognition, the
user recognition component 802 may determine the speech-controlled device 110 from which the audio data 111 originated. For example, the audio data 111 may include a tag indicating the speech-controlled device 110. Either the speech-controlled device 110 or the server(s) 120 may tag the audio data 111 as such. The tag indicating the speech-controlled device 110 may be associated with the user recognition feature/vector data 609 produced from the audio data 111. The user recognition component 802 may send a signal to the user recognition data storage 804, with the signal requesting only training data 805 associated with known users of the speech-controlled device 110 from which the audio data 111 originated. This may include accessing a user profile associated with the speech-controlled device 110 and then only inputting training data 805 associated with users corresponding to the user profile of the device 110. This limits the universe of possible training data the recognition component 802 should consider at runtime when recognizing a user, and thus decreases the amount of time needed to perform user recognition by decreasing the amount of training data 805 to be processed. Alternatively, the user recognition component 802 may access all (or some other subset of) training data 805 available to the system. However, accessing all training data 805 will likely increase the amount of time needed to perform user recognition based on the magnitude of training data to be processed. - If the
user recognition component 802 receives training data 805 as an audio waveform, the user recognition component 802 may determine features/vectors of the waveform(s) or otherwise convert the waveform into a data format that can be used by the user recognition component 802 to actually perform user recognition. The user recognition component 802 may then identify the user that spoke the utterance in the audio data 111 by comparing features/vectors of the user recognition feature/vector data 609 to training features/vectors (either received from the storage 804 or determined from training data 805 received from the storage 804). - The
user recognition component 802 may include a scoring component 822 which determines respective scores indicating whether the input utterance (represented by user verification feature/vector data 609) was spoken by particular users (represented by training data 805). The user recognition component 802 may also include a confidence component 824 which determines an overall confidence as to the accuracy of the user recognition operations (such as those of the scoring component 822) and/or an individual confidence for each user potentially identified by the scoring component 822. The output from the scoring component 822 may include scores for all users with respect to which user recognition was performed (e.g., all users associated with the speech-controlled device 110). For example, the output may include a first score for a first user, a second score for a second user, a third score for a third user, etc. Although illustrated as two separate components, the scoring component 822 and confidence component 824 may be combined into a single component or may be separated into more than two components. - The
scoring component 822 and confidence component 824 may implement one or more trained machine learning models (such as neural networks, classifiers, etc.) as known in the art. For example, the scoring component 822 may use probabilistic linear discriminant analysis (PLDA) techniques. PLDA scoring determines how likely it is that the input user recognition feature vector 609 corresponds to a particular training data feature vector 805 for a particular user. The PLDA scoring may generate similarity scores for each training feature vector considered and may output the list of scores and users and/or the user ID of the speaker whose training data feature vector most closely corresponds to the input user recognition feature vector 609 of the utterance. The scoring component 822 may also use other techniques, such as GMMs, generative Bayesian models, or the like, to determine similarity scores. - The
confidence component 824 may input various data including information about the ASR confidence 807, utterance length (e.g., number of frames or time of the utterance), audio condition/quality data (such as signal-to-interference data or other metric data), fingerprint data, image data, or other factors to consider how confident the user recognition component 802 is with regard to the scores linking users to the input utterance. The confidence component 824 may also consider the similarity scores and user IDs output by the scoring component 822. Thus, the confidence component 824 may determine that a lower ASR confidence 807, poor input audio quality, or other factors may result in a lower confidence of the user recognition component 802, whereas a higher ASR confidence 807, better input audio quality, or other factors may result in a higher confidence of the user recognition component 802. Precise determination of the confidence may depend on the configuration and training of the confidence component 824 and the models used therein. The confidence component 824 may operate using a number of different machine learning models/techniques such as GMMs, neural networks, etc. For example, the confidence component 824 may be a classifier configured to map a score output by the scoring component 822 to a confidence. - The
user recognition component 802 may output user recognition confidence data 811 specific to a single user, or to multiple users in the form of an N-best list. For example, the user recognition component 802 may output user recognition confidence data 811 with respect to each user indicated in the profile associated with the speech-controlled device 110 from which the audio data 111 was received. The user recognition confidence data 811 may include particular scores (e.g., 0.0-1.0, 0-1000, or whatever scale the system is configured to operate on). Thus the system may output an N-best list of potential users with confidence scores (e.g., John—0.2, Jane—0.8). Alternatively or in addition, the user recognition confidence data 811 may include binned recognition indicators. For example, a computed recognition score in a first range (e.g., 0.0-0.33) may be output as “low,” a computed recognition score in a second range (e.g., 0.34-0.66) may be output as “medium,” and a computed recognition score in a third range (e.g., 0.67-1.0) may be output as “high.” Thus, the system may output an N-best list of potential users with binned scores (e.g., John—low, Jane—high). Combined binned and confidence score outputs are also possible. Rather than a list of users and their respective scores and/or bins, the user recognition confidence data 811 may include only information related to the top-scoring user as determined by the user recognition component 802. The scores and bins may be based on information determined by the confidence component 824. The user recognition component 802 may also output a confidence value that the scores/bins are correct, where the confidence value indicates how confident the user recognition component 802 is in the output results. This confidence value may be determined by the confidence component 824. - The
confidence component 824 may determine individual user confidences and differences between user confidences when determining the user recognition confidence data 811. For example, if a difference between a first user's confidence score and a second user's confidence score is large, and the first user's confidence score is above a threshold, then the user recognition component 802 is able to recognize the first user as the user that spoke the utterance with a much higher confidence than if the difference between the users' confidences were smaller. - The
user recognition component 802 may perform certain thresholding to avoid incorrect user recognition results being output. For example, the user recognition component 802 may compare a confidence score output by the confidence component 824 to a confidence threshold. If the confidence score is not above the confidence threshold (for example, a confidence of “medium” or higher), the user recognition component 802 may not output user recognition confidence data 811, or may only include in that data 811 an indication that a user speaking the utterance could not be verified. Further, the user recognition component 802 may not output user recognition confidence data 811 until enough user recognition feature/vector data 609 is accumulated and processed to verify the user above a threshold confidence. Thus, the user recognition component 802 may wait until a sufficient threshold quantity of audio data of the utterance has been processed before outputting user recognition confidence data 811. The quantity of received audio data may also be considered by the confidence component 824. - The
user recognition component 802 may be defaulted to output binned (e.g., low, medium, high) user recognition confidence data 811. However, this may be problematic from the perspective of the command processor(s) 290. For example, if the user recognition component 802 computes a single binned confidence for multiple users, the command processor(s) 290 may not be able to determine for which user to determine content. In this situation, the user recognition component 802 may be configured to override its default setting and output user recognition confidence data 811 including values (e.g., 0.0-1.0) associated with the users associated with the same binned confidence. This enables the command processor(s) 290 to select content associated with the user associated with the highest confidence value. The user recognition confidence data 811 may also include the user IDs corresponding to the potential user(s) who spoke the utterance. - In addition, the
user recognition component 802 may use secondary data 809 to inform user recognition processing. Thus, a trained model or other component of the user recognition component 802 may be trained to take secondary data 809 as an input feature when performing recognition. Secondary data 809 may include a wide variety of data types depending on system configuration and may be made available from other sensors, devices, or storage such as user profile data 502, etc. The secondary data 809 may include a time of day at which the audio data was captured, a day of the week in which the audio data was captured, the text data 300, and/or the NLU results 385. - In one example,
secondary data 809 may include image data or video data. For example, facial recognition may be performed on image data or video data received from the speech-controlled device 110 from which the audio data 111 was received. Facial recognition may be performed by the user recognition component 802, or by another component of the server(s) 120. The output of the facial recognition process may be used by the user recognition component 802. That is, facial recognition output data may be used in conjunction with the comparison of the features/vectors of the audio data 111 and training data 805 to perform more accurate user recognition. - The
secondary data 809 may also include location data of the speech-controlled device 110. The location data may be specific to a building within which the speech-controlled device 110 is located. For example, if the speech-controlled device 110 is located in user A's bedroom, such location may increase user recognition confidence data associated with user A, but decrease user recognition confidence data associated with user B. - The
secondary data 809 may further include type data indicating a type of the speech-controlled device 110. Different types of speech-controlled devices may include, for example, a smart watch, a smart phone, a tablet computer, and a vehicle. The type of the speech-controlled device 110 may be indicated in a profile associated with the speech-controlled device 110. For example, if the speech-controlled device 110 from which the audio data 111 was received is a smart watch or vehicle belonging to user A, the fact that the speech-controlled device 110 belongs to user A may increase user recognition confidence data associated with user A, but decrease user recognition confidence data associated with user B. - The
secondary data 809 may additionally include geographic coordinate data associated with the speech-controlled device 110. For example, a profile associated with a vehicle may indicate multiple users (e.g., user A and user B). The vehicle may include a global positioning system (GPS) indicating latitude and longitude coordinates of the vehicle when the audio data 111 is captured by the vehicle. As such, if the vehicle is located at a coordinate corresponding to a work location/building of user A, this may increase user recognition confidence data associated with user A, but decrease user recognition confidence data of all other users indicated in the profile associated with the vehicle. Global coordinates and associated locations (e.g., work, home, etc.) may be indicated in a user profile associated with the speech-controlled device 110. The global coordinates and associated locations may be associated with respective users in the user profile. - The
secondary data 809 may also include other data/signals about the activity of a particular user that may be useful in performing user recognition of an input utterance. For example, if a user has recently entered a code to disable a home security alarm, and the utterance corresponds to a device at the home, signals from the home security alarm about the disabling user, time of disabling, etc. may be reflected in the secondary data 809 and considered by the user recognition component 802. If a mobile device (such as a phone, Tile, dongle, or other device) known to be associated with a particular user is detected proximate to (for example physically close to, connected to the same WiFi network as, or otherwise nearby) the speech-controlled device 110, this may be reflected in the secondary data 809 and considered by the user recognition component 802. - Depending on system configuration, the
secondary data 809 may be configured to be included in the vector representation of the user recognition feature/vector data 609 (for example using encoder 750) so that all the data relating to the utterance to be processed by the scoring component 822 may be included in a single vector. Alternatively, the secondary data 809 may be reflected in one or more different data structures to be processed by the scoring component 822. - As shown in
FIG. 8, the ASR component 250 may output text data 300, which in turn is processed by the NLU component 260. The results 385 of NLU processing performed by the NLU component 260 and the user recognition confidence data 811 output by the user recognition component 802 may be sent to one or more applications, represented by command processor(s) 290. The command processor(s) 290 that receives the NLU results 385 and the user recognition confidence score data 811 may be determined by the server(s) 120 as corresponding to content responsive to the utterance in the audio data 111. For example, if the audio data 111 includes the utterance “Play my music,” the NLU results 385 and user recognition confidence data 811 may be sent to a music playing command processor 290. If user recognition confidence data 811 associated with a single user is passed to the command processor(s) 290, the command processor(s) 290 may identify content responsive to the utterance and associated with the user. If, instead, user recognition confidence score data 811 associated with multiple users is passed to the command processor(s) 290, the command processor(s) 290 may identify content responsive to the utterance and associated with the user associated with the highest recognition confidence. - Various machine learning techniques may be used to train and operate models to perform various steps described above, such as user recognition feature extraction, encoding, user recognition scoring, user recognition confidence determination, etc. Models may be trained and operated according to various machine learning techniques. Such techniques may include, for example, neural networks (such as deep neural networks and/or recurrent neural networks), inference engines, trained classifiers, etc. Examples of trained classifiers include Support Vector Machines (SVMs), neural networks, decision trees, AdaBoost (short for “Adaptive Boosting”) combined with decision trees, and random forests. 
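The per-user scoring, example bin ranges, and N-best output described earlier (scoring component 822 and confidence component 824) can be sketched in plain Python. Cosine similarity is used here as a simple stand-in for PLDA scoring, and all names and vectors are illustrative assumptions:

```python
import math

# Cosine similarity stands in for PLDA scoring of the runtime utterance
# vector 609 against each known user's training vector 805.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Example bin ranges from the text: 0.0-0.33 "low", 0.34-0.66 "medium",
# 0.67-1.0 "high".
def bin_score(score):
    return "low" if score <= 0.33 else "medium" if score <= 0.66 else "high"

def recognize(utterance_vec, training_vecs):
    """Return an N-best list of (user, score, bin), best candidate first."""
    scored = [(user, cosine(utterance_vec, vec))
              for user, vec in training_vecs.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [(user, round(score, 2), bin_score(score)) for user, score in scored]

training = {"John": [0.9, 0.1, 0.0], "Jane": [0.1, 0.8, 0.6]}
n_best = recognize([0.2, 0.7, 0.7], training)
assert n_best[0][0] == "Jane"  # the utterance vector is closest to Jane's
```

A command processor receiving such an N-best list could then act on the top-scoring user, or fall back to raw values when several users share a bin, as the text describes.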
Focusing on SVM as an example, SVM is a supervised learning model with associated learning algorithms that analyze data and recognize patterns in the data, and which are commonly used for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples into one category or the other, making it a non-probabilistic binary linear classifier. More complex SVM models may be built with the training set identifying more than two categories, with the SVM determining which category is most similar to input data. An SVM model may be mapped so that the examples of the separate categories are divided by clear gaps. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gaps they fall on. Classifiers may issue a “score” indicating which category the data most closely matches. The score may provide an indication of how closely the data matches the category.
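The SVM decision rule just described reduces, at inference time, to checking which side of the separating hyperplane an example falls on. A minimal sketch, assuming the weights and bias came from some prior training step (the values here are invented):

```python
# SVM inference sketch: a trained hyperplane (weights w, bias b) divides the
# two categories; the sign of w.x + b picks the category, and the magnitude
# of the score indicates how closely the example matches it.

def svm_decide(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    category = 1 if score >= 0 else -1
    return category, score

w, b = [2.0, -1.0], -0.5                      # hypothetical trained parameters
assert svm_decide(w, b, [1.0, 0.5])[0] == 1   # 2*1.0 - 0.5 - 0.5 = 1.0 >= 0
assert svm_decide(w, b, [0.0, 1.0])[0] == -1  # -1.0 - 0.5 = -1.5 < 0
```

Training, which finds `w` and `b` by maximizing the margin between categories, is omitted here; only the classification step the paragraph describes is shown.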
- In order to apply the machine learning techniques, the machine learning processes themselves need to be trained. Training a machine learning component such as, in this case, one of the first or second models, requires establishing a “ground truth” for the training examples. In machine learning, the term “ground truth” refers to the accuracy of a training set's classification for supervised learning techniques. Various techniques may be used to train the models including backpropagation, statistical learning, supervised learning, semi-supervised learning, stochastic learning, or other known techniques.
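For user recognition, establishing ground truth looks in practice like the enrollment step described earlier: sample utterances known to belong to a user are encoded and stored as a size-F training vector. A minimal sketch, where the dict stands in for user recognition data storage 804 and all vectors are toy values:

```python
# Enrollment sketch: average the encoded vectors of a known user's sample
# utterances into one size-F training vector and store it keyed by user ID.

def enroll(storage, user_id, sample_vectors):
    F = len(sample_vectors[0])
    assert all(len(v) == F for v in sample_vectors), "samples must share size F"
    storage[user_id] = [sum(v[i] for v in sample_vectors) / len(sample_vectors)
                        for i in range(F)]

storage = {}
enroll(storage, "userA", [[1.0, 0.0], [0.0, 1.0]])
assert storage["userA"] == [0.5, 0.5]
```

Because the stored vector has the same fixed size F as runtime utterance vectors, it can be compared directly by the scoring step at recognition time.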
FIG. 9 is a block diagram conceptually illustrating a user device 110 (e.g., the speech-controlled device 110 described herein) that may be used with the described system. FIG. 10 is a block diagram conceptually illustrating example components of a remote device, such as the server 120, that may assist with ASR processing, NLU processing, or command processing. Multiple servers 120 may be included in the system 100, such as one server 120 for performing ASR, one server 120 for performing NLU, etc. In operation, each of these devices (or groups of devices) may include computer-readable and computer-executable instructions that reside on the respective device (110/120), as will be discussed further below. - Each of these devices (110/120) may include one or more controllers/processors (904/1004), each of which may include a central processing unit (CPU) for processing data and computer-readable instructions, and a memory (906/1006) for storing data and instructions of the respective device. The memories (906/1006) may individually include volatile random access memory (RAM), non-volatile read only memory (ROM), non-volatile magnetoresistive (MRAM) memory, and/or other types of memory. Each device (110/120) may also include a data storage component (908/1008) for storing data and controller/processor-executable instructions. Each data storage component may individually include one or more non-volatile storage types such as magnetic storage, optical storage, solid-state storage, etc. Each device (110/120) may also be connected to removable or external non-volatile memory and/or storage (such as a removable memory card, memory key drive, networked storage, etc.) through respective input/output device interfaces (902/1002).
- Computer instructions for operating each device (110/120) and its various components may be executed by the respective device's controller(s)/processor(s) (904/1004), using the memory (906/1006) as temporary “working” storage at runtime. A device's computer instructions may be stored in a non-transitory manner in non-volatile memory (906/1006), storage (908/1008), or an external device(s). Alternatively, some or all of the executable instructions may be embedded in hardware or firmware on the respective device in addition to or instead of software.
- Each device (110/120) includes input/output device interfaces (902/1002). A variety of components may be connected through the input/output device interfaces (902/1002), as will be discussed further below. Additionally, each device (110/120) may include an address/data bus (924/1024) for conveying data among components of the respective device. Each component within a device (110/120) may also be directly connected to other components in addition to (or instead of) being connected to other components across the bus (924/1024).
- Referring to
FIG. 9, the device 110 may include input/output device interfaces 902 that connect to a variety of components, such as an audio output component such as a speaker 101, a wired headset or a wireless headset (not illustrated), or other component capable of outputting audio. The device 110 may also include an audio capture component. The audio capture component may be, for example, a microphone 103 or array of microphones, a wired headset or a wireless headset (not illustrated), etc. The microphone 103 may be configured to capture audio. If an array of microphones is included, the approximate distance to a sound's point of origin may be determined by acoustic localization based on time and amplitude differences between sounds captured by different microphones of the array. The device 110 may additionally include an image or video capture component, such as the camera 115. The camera 115 may be configured to capture data used to perform facial recognition, and ultimately user recognition. - For example, via antenna(s) 914, the input/output device interfaces 902 may connect to one or
more networks 199 via a wireless local area network (WLAN) (such as WiFi) radio, Bluetooth, and/or wireless network radio, such as a radio capable of communication with a wireless communication network such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, etc. A wired connection such as Ethernet may also be supported. Through the network(s) 199, the speech processing system may be distributed across a networked environment. - Referring to
FIG. 10, the server 120 may include a user recognition component 802 configured to perform user recognition as described herein. The server 120 may also be configured with the TTS component 414, AFE 256, or other components as described herein. - The
device 110 and/or the server 120 may include an ASR component 250. The ASR component 250 in the device 110 may be of limited or extended capabilities. The ASR component 250 may include the language models 254 stored in ASR model storage component 252. If limited speech recognition is included, the ASR component 250 may be configured to identify a limited number of words, whereas extended speech recognition may be configured to recognize a much larger range of words. - The
device 110 and/or the server 120 may include a limited or extended NLU component 260. The NLU component 260 in the device 110 may be of limited or extended capabilities. The NLU component 260 may comprise the named entity recognition component 262, the intent classification component 264, and/or other components. The NLU component 260 may also include a stored knowledge base and/or entity library, or those storages may be separately located. - The
device 110 and/or the server 120 may also include the command processor 290 configured to execute commands/functions associated with a spoken utterance as described herein. - As noted above, multiple devices may be employed in a single speech processing system. In such a multi-device system, each of the devices may include different components for performing different aspects of the speech processing. The multiple devices may include overlapping components. The components of the
device 110 and the server 120, as illustrated in FIGS. 9 and 10, are exemplary, and may be located as a stand-alone device or may be included, in whole or in part, as a component of a larger device or system. - As illustrated in
FIG. 11, multiple devices (120, 110 a-110 e) may contain components of the system 100, and the devices may be connected over a network 199. The network 199 may include a local or private network or may include a wide network such as the Internet. Devices may be connected to the network 199 through either wired or wireless connections. For example, the speech-controlled device 110 a, a smart phone 110 b, a smart watch 110 c, a tablet computer 110 d, and/or a vehicle 110 e may be connected to the network 199 through a wireless service provider, over a WiFi or cellular network connection, or the like. Other devices are included as network-connected support devices, such as the server 120 or others. The support devices may connect to the network 199 through a wired connection or wireless connection. Networked devices may capture audio using one or more built-in or connected microphones 103 or audio capture devices, with processing performed by ASR, NLU, or other components of the same device or another device connected via the network 199, such as the ASR component 250, the NLU component 260, etc. of one or more servers 120. - The concepts disclosed herein may be applied within a number of different devices and computer systems, including, for example, general-purpose computing systems, speech processing systems, object recognition (e.g., facial recognition) systems, and distributed computing environments.
- The above aspects of the present disclosure are meant to be illustrative. They were chosen to explain the principles and application of the disclosure and are not intended to be exhaustive or to limit the disclosure. Many modifications and variations of the disclosed aspects may be apparent to those of skill in the art. Persons having ordinary skill in the field of computers, speech processing, object recognition (e.g., facial recognition), and user recognition should recognize that components and process steps described herein may be interchangeable with other components or steps, or combinations of components or steps, and still achieve the benefits and advantages of the present disclosure. Moreover, it should be apparent to one skilled in the art that the disclosure may be practiced without some or all of the specific details and steps disclosed herein.
- Aspects of the disclosed system may be implemented as a computer method or as an article of manufacture such as a memory device or non-transitory computer readable storage medium. The computer readable storage medium may be readable by a computer and may comprise instructions for causing a computer or other device to perform processes described in the present disclosure. The computer readable storage medium may be implemented by a volatile computer memory, non-volatile computer memory, hard drive, solid-state memory, flash drive, removable disk, and/or other media. In addition, one or more of the components and engines may be implemented in firmware or hardware, such as the
AFE 256, which comprises, among other things, analog and/or digital filters (e.g., filters configured as firmware to a digital signal processor (DSP)). - As used in this disclosure, the term “a” or “one” may include one or more items unless specifically stated otherwise. Further, the phrase “based on” is intended to mean “based at least in part on” unless specifically stated otherwise.
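As a concrete illustration of the kind of digital filtering the AFE 256 might perform before feature extraction, the sketch below applies a simple causal FIR (moving-average) filter to a run of audio samples. This is an assumption-laden example: the disclosure only says the AFE comprises analog and/or digital filters (e.g., as DSP firmware); the function name, tap count, and coefficients here are invented for illustration.

```python
# Hypothetical sketch: one simple digital filter an acoustic front end
# might apply to raw samples. Coefficients are illustrative only.
def fir_filter(samples, coeffs):
    """Causal FIR filter: y[n] = sum_k coeffs[k] * x[n - k]."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]
        out.append(acc)
    return out


# A 4-tap moving average smooths sample-to-sample noise:
coeffs = [0.25, 0.25, 0.25, 0.25]
filtered = fir_filter([0.0, 4.0, 0.0, 4.0, 0.0, 4.0], coeffs)
```

In a deployed system this loop would typically run as fixed-coefficient firmware on a DSP rather than in Python, which is the point the specification makes about implementing components in firmware or hardware.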
Claims (21)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/584,489 US20220189458A1 (en) | 2016-12-22 | 2022-01-26 | Speech based user recognition |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/388,458 US10522134B1 (en) | 2016-12-22 | 2016-12-22 | Speech based user recognition |
US16/726,051 US11270685B2 (en) | 2016-12-22 | 2019-12-23 | Speech based user recognition |
US17/584,489 US20220189458A1 (en) | 2016-12-22 | 2022-01-26 | Speech based user recognition |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/726,051 Continuation US11270685B2 (en) | 2016-12-22 | 2019-12-23 | Speech based user recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220189458A1 (en) | 2022-06-16
Family
ID=69057435
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/388,458 Active US10522134B1 (en) | 2016-12-22 | 2016-12-22 | Speech based user recognition |
US16/726,051 Active 2037-08-30 US11270685B2 (en) | 2016-12-22 | 2019-12-23 | Speech based user recognition |
US17/584,489 Abandoned US20220189458A1 (en) | 2016-12-22 | 2022-01-26 | Speech based user recognition |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/388,458 Active US10522134B1 (en) | 2016-12-22 | 2016-12-22 | Speech based user recognition |
US16/726,051 Active 2037-08-30 US11270685B2 (en) | 2016-12-22 | 2019-12-23 | Speech based user recognition |
Country Status (1)
Country | Link |
---|---|
US (3) | US10522134B1 (en) |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10755294B1 (en) | 2015-04-28 | 2020-08-25 | Intuit Inc. | Method and system for increasing use of mobile devices to provide answer content in a question and answer based customer support system |
US10475044B1 (en) | 2015-07-29 | 2019-11-12 | Intuit Inc. | Method and system for question prioritization based on analysis of the question content and predicted asker engagement before answer content is generated |
US10552843B1 (en) | 2016-12-05 | 2020-02-04 | Intuit Inc. | Method and system for improving search results by recency boosting customer support content for a customer self-help system associated with one or more financial management systems |
US11100932B2 (en) * | 2017-02-10 | 2021-08-24 | Synaptics Incorporated | Robust start-end point detection algorithm using neural network |
US11853884B2 (en) | 2017-02-10 | 2023-12-26 | Synaptics Incorporated | Many or one detection classification systems and methods |
US20190018692A1 (en) * | 2017-07-14 | 2019-01-17 | Intuit Inc. | System and method for identifying and providing personalized self-help content with artificial intelligence in a customer self-help system |
US10922367B2 (en) | 2017-07-14 | 2021-02-16 | Intuit Inc. | Method and system for providing real time search preview personalization in data management systems |
US10901811B2 (en) | 2017-07-31 | 2021-01-26 | Splunk Inc. | Creating alerts associated with a data storage system based on natural language requests |
US11494395B2 (en) | 2017-07-31 | 2022-11-08 | Splunk Inc. | Creating dashboards for viewing data in a data storage system based on natural language requests |
KR102421745B1 (en) * | 2017-08-22 | 2022-07-19 | 삼성전자주식회사 | System and device for generating TTS model |
US11093951B1 (en) | 2017-09-25 | 2021-08-17 | Intuit Inc. | System and method for responding to search queries using customer self-help systems associated with a plurality of data management systems |
US20190156834A1 (en) * | 2017-11-22 | 2019-05-23 | Toyota Motor Engineering & Manufacturing North America, Inc. | Vehicle virtual assistance systems for taking notes during calls |
US11436642B1 (en) | 2018-01-29 | 2022-09-06 | Intuit Inc. | Method and system for generating real-time personalized advertisements in data management self-help systems |
US11269665B1 (en) | 2018-03-28 | 2022-03-08 | Intuit Inc. | Method and system for user experience personalization in data management systems using machine learning |
US11929155B1 (en) * | 2018-05-31 | 2024-03-12 | Allscripts Software, Llc | Apparatus, system and method for predictive processing of electronic health data for notification system |
EP3899807A1 (en) | 2019-01-23 | 2021-10-27 | Google LLC | Generating neural network outputs using insertion operations |
US20200335089A1 (en) * | 2019-04-16 | 2020-10-22 | International Business Machines Corporation | Protecting chat with artificial intelligence |
US11195522B1 (en) * | 2019-06-17 | 2021-12-07 | Amazon Technologies, Inc. | False invocation rejection for speech processing systems |
US11620978B2 (en) * | 2019-08-14 | 2023-04-04 | Electronics And Telecommunications Research Institute | Automatic interpretation apparatus and method |
US12020693B2 (en) * | 2020-04-29 | 2024-06-25 | Samsung Electronics Co., Ltd. | System and method for out-of-vocabulary phrase support in automatic speech recognition |
KR20220020723A (en) * | 2020-08-12 | 2022-02-21 | 삼성전자주식회사 | The device for recognizing the user's speech input and the method for operating the same |
CN112581981B (en) * | 2020-11-04 | 2023-11-03 | 北京百度网讯科技有限公司 | Man-machine interaction method, device, computer equipment and storage medium |
WO2022167937A1 (en) | 2021-02-05 | 2022-08-11 | Alcon Inc. | Voice-controlled surgical system |
US20220383573A1 (en) * | 2021-05-19 | 2022-12-01 | Disney Enterprises, Inc. | Frame interpolation for rendered content |
WO2024097684A1 (en) * | 2022-10-31 | 2024-05-10 | CDC Phone App IP 2023 LLC | Systems for providing real-time feedback to reduce undesired speaking patterns, and methods of using the same |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150198454A1 (en) * | 2014-01-14 | 2015-07-16 | Toyota Motor Engineering & Manufacturing North America, Inc. | Smart necklace with stereo vision and onboard processing |
US20160125879A1 (en) * | 2014-11-03 | 2016-05-05 | Microsoft Technology Licensing, Llc | Augmentation of key phrase user recognition |
US20170091433A1 (en) * | 2015-09-25 | 2017-03-30 | Intel Corporation | Techniques to determine distinctiveness of a biometric input in a biometric system |
US20170124853A1 (en) * | 2015-11-02 | 2017-05-04 | Rapidsos, Inc. | Method and system for situational awareness for emergency response |
US9922236B2 (en) * | 2014-09-17 | 2018-03-20 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable eyeglasses for providing social and environmental awareness |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009102885A1 (en) * | 2008-02-12 | 2009-08-20 | Phone Through, Inc. | Systems and methods for enabling interactivity among a plurality of devices |
US20110243449A1 (en) * | 2010-03-31 | 2011-10-06 | Nokia Corporation | Method and apparatus for object identification within a media file using device identification |
TWI456515B (en) * | 2012-07-13 | 2014-10-11 | Univ Nat Chiao Tung | Human identification system by fusion of face recognition and speaker recognition, method and service robot thereof |
US9147399B1 (en) | 2012-08-31 | 2015-09-29 | Amazon Technologies, Inc. | Identification using audio signatures and additional characteristics |
US8744645B1 (en) * | 2013-02-26 | 2014-06-03 | Honda Motor Co., Ltd. | System and method for incorporating gesture and voice recognition into a single system |
US9460715B2 (en) | 2013-03-04 | 2016-10-04 | Amazon Technologies, Inc. | Identification using audio signatures and additional characteristics |
US9711148B1 (en) * | 2013-07-18 | 2017-07-18 | Google Inc. | Dual model speaker identification |
US20150037765A1 (en) * | 2013-08-02 | 2015-02-05 | Speetra, Inc. | System and method for interactive electronic learning and assessment |
US9754093B2 (en) * | 2014-08-28 | 2017-09-05 | Ncr Corporation | Methods and a system for automated authentication confidence |
US20170109448A1 (en) * | 2015-10-18 | 2017-04-20 | James Joseph Adamy | System and method for enhanced user matching based on multiple data sources |
US9729821B1 (en) * | 2016-03-31 | 2017-08-08 | Amazon Technologies, Inc. | Sensor fusion for location based device grouping |
- 2016-12-22: US application US15/388,458 granted as patent US10522134B1 (status: Active)
- 2019-12-23: US application US16/726,051 granted as patent US11270685B2 (status: Active)
- 2022-01-26: US application US17/584,489 published as US20220189458A1 (status: Abandoned)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150198454A1 (en) * | 2014-01-14 | 2015-07-16 | Toyota Motor Engineering & Manufacturing North America, Inc. | Smart necklace with stereo vision and onboard processing |
US9922236B2 (en) * | 2014-09-17 | 2018-03-20 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable eyeglasses for providing social and environmental awareness |
US20160125879A1 (en) * | 2014-11-03 | 2016-05-05 | Microsoft Technology Licensing, Llc | Augmentation of key phrase user recognition |
US20170091433A1 (en) * | 2015-09-25 | 2017-03-30 | Intel Corporation | Techniques to determine distinctiveness of a biometric input in a biometric system |
US20170124853A1 (en) * | 2015-11-02 | 2017-05-04 | Rapidsos, Inc. | Method and system for situational awareness for emergency response |
Also Published As
Publication number | Publication date |
---|---|
US10522134B1 (en) | 2019-12-31 |
US20200193967A1 (en) | 2020-06-18 |
US11270685B2 (en) | 2022-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11990127B2 (en) | User recognition for speech processing systems | |
US11270685B2 (en) | Speech based user recognition | |
US11854545B2 (en) | Privacy mode based on speaker identifier | |
US20230317074A1 (en) | Contextual voice user interface | |
US20230012984A1 (en) | Generation of automated message responses | |
US11580991B2 (en) | Speaker based anaphora resolution | |
US11062694B2 (en) | Text-to-speech processing with emphasized output audio | |
US10027662B1 (en) | Dynamic user authentication | |
US11798556B2 (en) | Configurable output data formats | |
US10276149B1 (en) | Dynamic text-to-speech output | |
US10176809B1 (en) | Customized compression and decompression of audio data | |
US11837225B1 (en) | Multi-portion spoken command framework | |
US10963497B1 (en) | Multi-stage query processing | |
US11715472B2 (en) | Speech-processing system | |
US20240071385A1 (en) | Speech-processing system | |
US11817090B1 (en) | Entity resolution using acoustic data | |
US11393451B1 (en) | Linked content in voice user interface |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | AS | Assignment | Owner name: AMAZON TECHNOLOGIES, INC., WASHINGTON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MATSOUKAS, SPYRIDON;KHARE, APARNA;KRISHNAMOORTHY, VISHWANATHAN;AND OTHERS;SIGNING DATES FROM 20170929 TO 20171002;REEL/FRAME:058770/0033
 | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
 | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
 | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
 | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
 | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
 | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION