Hermansky et al., 1989 - Google Patents
The effective second formant F2' and the vocal tract front-cavity (Hermansky et al., 1989)
- Document ID: 1762280716174441383
- Authors: Hermansky H; Broad D
- Publication year: 1989
- Publication venue: International Conference on Acoustics, Speech, and Signal Processing
Snippet
The authors advance the hypothesis that the equivalent perceptual second formant F2' carries information about the front cavity of the vocal tract. They note a previous result that peaks found by perceptually based linear predictive (PLP) analysis closely track the F2'. The …
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/003—Changing voice quality, e.g. pitch or formants
- G10L21/007—Changing voice quality, e.g. pitch or formants characterised by the process used
- G10L21/013—Adapting to target pitch
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
- G10L13/033—Voice editing, e.g. manipulating the voice of the synthesiser
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/06—Elementary speech units used in speech synthesisers; Concatenation rules
- G10L13/07—Concatenation rules
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/66—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/08—Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/26—Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
- G10L21/10—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids transforming into visible information
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 characterised by the type of extracted parameters
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signal, using source filter models or psychoacoustic analysis
Similar Documents
Publication | Title
---|---
Dutoit | An introduction to text-to-speech synthesis
Summerfield | Articulatory rate and perceptual constancy in phonetic perception
Story | Phrase-level speech simulation with an airway modulation model of speech production
Rosner et al. | Vowel perception and production
Suni et al. | Hierarchical representation and estimation of prosody using continuous wavelet transform
Botinis et al. | Developments and paradigms in intonation research
Kuwabara et al. | Acoustic characteristics of speaker individuality: Control and conversion
Raux et al. | A unit selection approach to F0 modeling and its application to emphasis
Rao | Voice conversion by mapping the speaker-specific features using pitch synchronous approach
Schwab et al. | Pattern recognition by humans and machines: speech perception
Hermansky et al. | The effective second formant F2' and the vocal tract front-cavity
Reddy et al. | Two-stage intonation modeling using feedforward neural networks for syllable based text-to-speech synthesis
Ogden et al. | ProSynth: an integrated prosodic approach to device-independent, natural-sounding speech synthesis
Sanchez et al. | Hierarchical modeling of F0 contours for voice conversion
Gussenhoven et al. | On the speaker-dependence of the perceived prominence of F0 peaks
Hill et al. | Low-level articulatory synthesis: A working text-to-speech solution and a linguistic tool
Teixeira et al. | Simulation of human speech production applied to the study and synthesis of European Portuguese
Wagner | A comprehensive model of intonation for application in speech synthesis
Beller et al. | Speech rates in French expressive speech
Padmini et al. | Age-Based Automatic Voice Conversion Using Blood Relation for Voice Impaired
Mertens et al. | Comparing approaches to pitch contour stylization for speech synthesis
Lobanov et al. | TTS-Synthesizer as a Computer Means for Personal Voice Cloning (On the example of Russian)
Carré et al. | Acoustic contrast and the origin of the human vowel space
Hermansky | Why is the formant frequency difference limen asymmetric?
Fakotakis | Corpus Design, Recording and Phonetic Analysis of Greek Emotional Database