Aylett et al., 2022 - Google Patents
Peter 2.0: Building a Cyborg
- Document ID
- 7807526673485426429
- Author
- Aylett M
- Shapiro A
- Prasad S
- Nachman L
- Marcella S
- Scott-Morgan P
- Publication year
- 2022
- Publication venue
- Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments
Snippet
Peter Scott-Morgan has MND/ALS. He is now paralyzed and depends on technology to keep him alive and to communicate with others. In this paper we outline the design and creation of a unique communication system driven by an open-source eye-tracking interface (ACAT) …
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
        - G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
          - G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T13/00—Animation
        - G06T13/20—3D [Three Dimensional] animation
          - G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
        - G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
          - G10L21/10—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids transforming into visible information
            - G10L2021/105—Synthesis of the lips movements from speech, e.g. for talking heads
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L15/00—Speech recognition
        - G10L15/08—Speech classification or search
          - G10L15/18—Speech classification or search using natural language modelling
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L13/00—Speech synthesis; Text to speech systems
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computer systems based on biological models
        - G06N3/004—Artificial life, i.e. computers simulating life
Similar Documents
| Publication | Title |
|---|---|
| WO2022048403A1 (en) | Virtual role-based multimodal interaction method, apparatus and system, storage medium, and terminal |
| Cassell et al. | Beat: the behavior expression animation toolkit |
| JP7500582B2 (en) | Real-time generation of talking animation |
| CN110688911A (en) | Video processing method, device, system, terminal equipment and storage medium |
| WO2022106654A2 (en) | Methods and systems for video translation |
| JP7381581B2 (en) | Machine interaction |
| Katayama et al. | Situation-aware emotion regulation of conversational agents with kinetic earables |
| DeCarlo et al. | Making discourse visible: Coding and animating conversational facial displays |
| Beskow et al. | OLGA-a dialogue system with an animated talking agent |
| DeCarlo et al. | Specifying and animating facial signals for discourse in embodied conversational agents |
| Gjaci et al. | Towards culture-aware co-speech gestures for social robots |
| Li et al. | A survey of computer facial animation techniques |
| Čereković et al. | Multimodal behavior realization for embodied conversational agents |
| Tang et al. | Real-time conversion from a single 2D face image to a 3D text-driven emotive audio-visual avatar |
| Beskow | Talking heads-communication, articulation and animation |
| US20240323332A1 | System and method for generating and interacting with conversational three-dimensional subjects |
| Aylett et al. | Peter 2.0: Building a Cyborg |
| De Melo et al. | Multimodal expression in virtual humans |
| Kshirsagar et al. | Multimodal animation system based on the MPEG-4 standard |
| Chandrasiri et al. | Internet communication using real-time facial expression analysis and synthesis |
| Kolivand et al. | Realistic lip syncing for virtual character using common viseme set |
| Volkova et al. | A robot commenting texts in an emotional way |
| Gonzalez et al. | Passing an enhanced Turing test–interacting with lifelike computer representations of specific individuals |
| Altarawneh et al. | Leveraging Cloud-based Tools to Talk with Robots |
| Nakatsu | Nonverbal information recognition and its application to communications |