
Teixeira et al., 1997 - Google Patents

A software tool to study Portuguese vowels


Document ID
17667696780928924330
Author
Teixeira A
Vaz F
Principe J
Publication year
1997
Publication venue
Fifth European Conference on Speech Communication and Technology

Snippet

We are developing a software system to aid the study of Portuguese vowel production. This tool is an articulatory synthesizer with a graphical user interface. The synthesizer is composed of a sagittal articulatory model derived from the Mermelstein model and a frequency …
Full text available at www.academia.edu (PDF)
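The snippet describes an articulatory synthesizer: articulatory parameters from a Mermelstein-style sagittal model drive an acoustic (frequency-domain) stage that produces vowel resonances. As a minimal, hypothetical illustration of that acoustic side (not the authors' code), the classic textbook approximation models the vocal tract as a uniform lossless tube closed at the glottis and open at the lips, whose resonances (formants) fall at odd quarter-wavelength frequencies:

```python
# Hypothetical sketch: formants of a uniform lossless tube,
# closed at the glottis, open at the lips.
# F_n = (2n - 1) * c / (4 * L), with c the speed of sound and
# L the tract length. All names below are illustrative.

def uniform_tube_formants(length_cm=17.5, c_cm_s=35000.0, n_formants=3):
    """Return the first n_formants resonance frequencies (Hz)."""
    return [(2 * n - 1) * c_cm_s / (4.0 * length_cm)
            for n in range(1, n_formants + 1)]

# A 17.5 cm tract gives the familiar neutral-vowel formants:
print(uniform_tube_formants())  # [500.0, 1500.0, 2500.0]
```

A real articulatory synthesizer replaces the uniform tube with an area function derived from the sagittal midline, but the resonance computation plays the same role.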

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/08: Speech classification or search
    • G10L 25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L 25/03: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00, characterised by the type of extracted parameters
    • G10L 25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00, specially adapted for particular use
    • G10L 25/51: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00, specially adapted for particular use for comparison or discrimination
    • G10L 25/66: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00, specially adapted for extracting parameters related to health condition
    • G10L 21/00: Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/003: Changing voice quality, e.g. pitch or formants
    • G10L 21/007: Changing voice quality, e.g. pitch or formants, characterised by the process used
    • G10L 21/013: Adapting to target pitch
    • G10L 13/00: Speech synthesis; Text to speech systems
    • G10L 13/06: Elementary speech units used in speech synthesisers; Concatenation rules
    • G10L 19/00: Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signal, using source filter models or psychoacoustic analysis
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G10L 17/00: Speaker identification or verification

Similar Documents

Publication Title
Ancilin et al. Improved speech emotion recognition with Mel frequency magnitude coefficient
Eyben Real-time speech and music classification by large audio feature space extraction
CN105023573B (en) Speech syllable/vowel/phone boundary detection using auditory attention cues
Singh et al. Modulation spectral features for speech emotion recognition using deep neural networks
Alku et al. Closed phase covariance analysis based on constrained linear prediction for glottal inverse filtering
Childers et al. Detection of laryngeal function using speech and electroglottographic data
Holambe et al. Advances in non-linear modeling for speech processing
Ghitza Robustness against noise: The role of timing-synchrony measurement
Airaksinen et al. Data augmentation strategies for neural network F0 estimation
Seshadri et al. Augmented CycleGANs for continuous scale normal-to-Lombard speaking style conversion
Yadav et al. Prosodic mapping using neural networks for emotion conversion in Hindi language
Jeon et al. Speech analysis in a model of the central auditory system
Teixeira et al. A software tool to study Portuguese vowels
Luo et al. Emotional Voice Conversion Using Neural Networks with Different Temporal Scales of F0 based on Wavelet Transform.
Prasad et al. Backend tools for speech synthesis in speech processing
Al-Radhi et al. RNN-based speech synthesis using a continuous sinusoidal model
Saba et al. Urdu Text-to-Speech Conversion Using Deep Learning
Matsumoto et al. Speech-like emotional sound generation using wavenet
Kawahara STRAIGHT-TEMPO: A universal tool to manipulate linguistic and para-linguistic speech information
Xue et al. A study on applying target prediction model to parameterize power envelope of emotional speech
Zhang et al. Determination of the vocal tract model order in iterative adaptive inverse filtering
Kaur et al. Designing and creating Punjabi Speech Synthesis System Using Hidden Markov Model
Shahrebabaki et al. A two-stage deep modeling approach to articulatory inversion
Pandit et al. Automatic speech recognition of Gujarati digits using artificial neural network