Yap et al., 2009 - Google Patents
Visual word recognition of multisyllabic words (Yap et al., 2009)
- Document ID
- 12074456701481742561
- Author
- Yap M
- Balota D
- Publication year
- 2009
- Publication venue
- Journal of Memory and Language
Snippet
The visual word recognition literature has been dominated by the study of monosyllabic words in factorial experiments, computational models, and megastudies. However, it is not yet clear whether the behavioral effects reported for monosyllabic words generalize reliably …
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
      - G06F11/00—Error detection; Error correction; Monitoring
      - G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
        - G06F17/20—Handling natural language data
          - G06F17/27—Automatic analysis, e.g. parsing
            - G06F17/2705—Parsing
            - G06F17/2765—Recognition
          - G06F17/28—Processing or translating of natural language
      - G06F19/00—Digital computing or data processing equipment or methods, specially adapted for specific applications
        - G06F19/30—Medical informatics, i.e. computer-based analysis or dissemination of patient or disease data
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N99/00—Subject matter not provided for in other groups of this subclass
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L13/00—Speech synthesis; Text to speech systems
      - G10L15/00—Speech recognition
        - G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
          - G10L15/065—Adaptation
            - G10L15/07—Adaptation to the speaker
        - G10L15/08—Speech classification or search
          - G10L15/18—Speech classification or search using natural language modelling
      - G10L17/00—Speaker identification or verification
      - G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
        - G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
          - G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination
            - G10L25/66—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
Similar Documents
Publication | Title
---|---
Yap et al. | Visual word recognition of multisyllabic words
Calzà et al. | Linguistic features and automatic classifiers for identifying mild cognitive impairment and dementia
Chuang et al. | The processing of pseudoword form and meaning in production and comprehension: A computational modeling approach using linear discriminative learning
Perry et al. | Beyond single syllables: Large-scale modeling of reading aloud with the Connectionist Dual Process (CDP++) model
Balota et al. | Megastudies: What do millions (or so) of trials tell us about lexical processing?
Bertini et al. | An automatic Alzheimer’s disease classifier based on spontaneous spoken English
Cortese et al. | Imageability and age of acquisition effects in disyllabic word recognition
Nettle | Social scale and structural complexity in human languages
Pimentel et al. | Phonotactic complexity and its trade-offs
Bürki et al. | Lexical representation of phonological variants: Evidence from pseudohomophone effects in different regiolects
Favaro et al. | Interpretable speech features vs. DNN embeddings: What to use in the automatic assessment of Parkinson’s disease in multi-lingual scenarios
Alderete et al. | Tone slips in Cantonese: Evidence for early phonological encoding
Kendall et al. | Considering performance in the automated and manual coding of sociolinguistic variables: Lessons from variable (ING)
Eden | Measuring phonological distance between languages
Johns | Mining a crowdsourced dictionary to understand consistency and preference in word meanings
Kandel et al. | Agreement attraction error and timing profiles in continuous speech
Lavechin et al. | Can statistical learning bootstrap early language acquisition? A modeling investigation
Ulicheva et al. | Phonotactic constraints: Implications for models of oral reading in Russian.
Deka et al. | AI-based automated speech therapy tools for persons with speech sound disorder: a systematic literature review
Tripathi et al. | Speech-based detection of multi-class Alzheimer’s disease classification using machine learning
Albright | Gradient phonological acceptability as a grammatical effect
Whetten et al. | Evaluating and improving automatic speech recognition using severity
Kelley et al. | The recognition of spoken pseudowords
Walker et al. | Connections and selections: Comparing multivariate predictions and parameter associations from latent variable models of picture naming
Tseng et al. | Model-assisted Lexical Tone Evaluation of three-year-old Chinese-speaking Children by also Considering Segment Production