Wang et al., 2013 - Google Patents
Articulatory distinctiveness of vowels and consonants: A data-driven approach
- Document ID
- 4877382646159936870
- Author
- Wang J
- Green J
- Samal A
- Yunusova Y
- Publication year
- 2013
Snippet
Purpose: To quantify the articulatory distinctiveness of 8 major English vowels and 11 English consonants based on tongue and lip movement time series data using a data-driven approach. Method: Tongue and lip movements of 8 vowels and 11 consonants from 10 …
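The snippet describes a data-driven classification of phonemes from tongue and lip movement time series. As a rough illustration only (not the authors' pipeline), the sketch below classifies toy flesh-point trajectories with a dynamic-time-warping nearest-neighbour rule; the data shapes, phoneme labels, and the DTW/1-NN choice are assumptions made purely for the example.

```python
# Hypothetical sketch: DTW-based nearest-neighbour classification of
# articulatory movement time series. All shapes and labels are illustrative.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two (T, D) movement trajectories."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # Euclidean frame distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def classify(query: np.ndarray, templates: dict) -> str:
    """Assign the phoneme label of the nearest template trajectory (1-NN)."""
    best_label, best_dist = None, np.inf
    for label, examples in templates.items():
        for ex in examples:
            d = dtw_distance(query, ex)
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy trajectories: T frames x 6 coordinates (e.g., 3 flesh points in 2D).
    templates = {
        "/a/": [rng.normal(0.0, 0.1, (40, 6)) for _ in range(3)],
        "/i/": [rng.normal(1.0, 0.1, (35, 6)) for _ in range(3)],
    }
    probe = rng.normal(1.0, 0.1, (38, 6))
    print(classify(probe, templates))  # expected: /i/
```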
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00221—Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
- G06K9/00268—Feature extraction; Face representation
- G06K9/00281—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/26—Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/66—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B21/00—Teaching, or communicating with, the blind, deaf or mute
- G09B21/001—Teaching or communicating with blind persons
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Detecting, measuring or recording for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/24—Speech recognition using non-acoustical features
Similar Documents
Publication | Title |
---|---|
Wang et al. | Articulatory distinctiveness of vowels and consonants: A data-driven approach |
Rudzicz et al. | The TORGO database of acoustic and articulatory speech from speakers with dysarthria |
Lee et al. | Biosignal sensors and deep learning-based speech recognition: A review |
Wang et al. | An optimal set of flesh points on tongue and lips for speech-movement classification |
Kim et al. | Speaker-independent silent speech recognition from flesh-point articulatory movements using an LSTM neural network |
Denby et al. | Silent speech interfaces |
Sahni et al. | The tongue and ear interface: a wearable system for silent speech recognition |
Engwall | Analysis of and feedback on phonetic features in pronunciation training with a virtual teacher |
Fabre et al. | Automatic animation of an articulatory tongue model from ultrasound images of the vocal tract |
Cao et al. | Articulation-to-Speech Synthesis Using Articulatory Flesh Point Sensors' Orientation Information |
Davis et al. | Audio-visual interactions with intact clearly audible speech |
Wang et al. | Word recognition from continuous articulatory movement time-series data using symbolic representations |
Freitas et al. | An introduction to silent speech interfaces |
Vojtech et al. | Surface electromyography-based recognition, synthesis, and perception of prosodic subvocal speech |
Smith et al. | Infant-directed visual prosody: Mothers' head movements and speech acoustics |
Aghaahmadi et al. | Clustering Persian viseme using phoneme subspace for developing visual speech application |
Kim et al. | Preliminary test of a wireless magnetic tongue tracking system for silent speech interface |
Kim et al. | Multiview Representation Learning via Deep CCA for Silent Speech Recognition |
Wang et al. | Across-speaker articulatory normalization for speaker-independent silent speech recognition |
Borsky et al. | Classification of voice modes using neck-surface accelerometer data |
Cao et al. | Comparing the performance of individual articulatory flesh points for articulation-to-speech synthesis |
de Menezes et al. | A method for lexical tone classification in audio-visual speech |
Mostafa et al. | Voiceless Bangla vowel recognition using sEMG signal |
Tan et al. | Extracting spatial muscle activation patterns in facial and neck muscles for silent speech recognition using high-density sEMG |
Tippannavar et al. | Advances and Challenges in Human Emotion Recognition Systems: A Comprehensive Review |