
Wang et al., 2013 - Google Patents

Articulatory distinctiveness of vowels and consonants: A data-driven approach


Document ID
4877382646159936870
Author
Wang J
Green J
Samal A
Yunusova Y
Publication year
2013
Snippet

Purpose To quantify the articulatory distinctiveness of 8 major English vowels and 11 English consonants based on tongue and lip movement time series data using a data-driven approach. Method Tongue and lip movements of 8 vowels and 11 consonants from 10 …
Full text available at www.ncbi.nlm.nih.gov (HTML)
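The snippet describes a data-driven approach to quantifying how distinguishable phonemes are from tongue and lip movement time-series data. As an illustration only (this is not the paper's own pipeline), a minimal sketch assuming dynamic-time-warping nearest-neighbour classification of labelled movement trajectories could look like the following; the function names and the `templates` structure are hypothetical:

    # Minimal sketch (assumption: DTW nearest-neighbour classification of
    # tongue/lip movement trajectories; NOT the authors' exact method).
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic-time-warping distance between two (T, D) movement trajectories."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    def classify(sample, templates):
        """Assign a phoneme label by nearest DTW distance to labelled templates.

        templates: dict mapping phoneme label -> list of (T, D) trajectories.
        """
        return min(templates,
                   key=lambda lbl: min(dtw_distance(sample, t) for t in templates[lbl]))

Under this assumption, classification accuracy (or the resulting confusion pattern) across phoneme pairs could serve as one proxy for articulatory distinctiveness.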

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00221 Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
    • G06K 9/00268 Feature extraction; Face representation
    • G06K 9/00281 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 Speaker identification or verification
    • G10L 17/26 Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/66 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 21/00 Teaching, or communicating with, the blind, deaf or mute
    • G09B 21/001 Teaching or communicating with blind persons
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Detecting, measuring or recording for diagnostic purposes; Identification of persons
    • A61B 5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/24 Speech recognition using non-acoustical features

Similar Documents

Wang et al. Articulatory distinctiveness of vowels and consonants: A data-driven approach
Rudzicz et al. The TORGO database of acoustic and articulatory speech from speakers with dysarthria
Lee et al. Biosignal sensors and deep learning-based speech recognition: A review
Wang et al. An optimal set of flesh points on tongue and lips for speech-movement classification
Kim et al. Speaker-independent silent speech recognition from flesh-point articulatory movements using an LSTM neural network
Denby et al. Silent speech interfaces
Sahni et al. The tongue and ear interface: a wearable system for silent speech recognition
Engwall Analysis of and feedback on phonetic features in pronunciation training with a virtual teacher
Fabre et al. Automatic animation of an articulatory tongue model from ultrasound images of the vocal tract
Cao et al. Articulation-to-Speech Synthesis Using Articulatory Flesh Point Sensors' Orientation Information.
Davis et al. Audio‐visual interactions with intact clearly audible speech
Wang et al. Word recognition from continuous articulatory movement time-series data using symbolic representations
Freitas et al. An introduction to silent speech interfaces
Vojtech et al. Surface electromyography–based recognition, synthesis, and perception of prosodic subvocal speech
Smith et al. Infant-directed visual prosody: Mothers’ head movements and speech acoustics
Aghaahmadi et al. Clustering Persian viseme using phoneme subspace for developing visual speech application
Kim et al. Preliminary test of a wireless magnetic tongue tracking system for silent speech interface
Kim et al. Multiview Representation Learning via Deep CCA for Silent Speech Recognition.
Wang et al. Across-speaker articulatory normalization for speaker-independent silent speech recognition
Borsky et al. Classification of voice modes using neck-surface accelerometer data
Cao et al. Comparing the performance of individual articulatory flesh points for articulation-to-speech synthesis
de Menezes et al. A method for lexical tone classification in audio-visual speech
Mostafa et al. Voiceless Bangla vowel recognition using sEMG signal
Tan et al. Extracting spatial muscle activation patterns in facial and neck muscles for silent speech recognition using high-density sEMG
Tippannavar et al. Advances and Challenges in Human Emotion Recognition Systems: A Comprehensive Review