Normoyle et al., 2024 - Google Patents
Using LLMs to Animate Interactive Story Characters with Emotions and Personality
- Document ID: 1698348142021037605
- Authors: Normoyle A; Sedoc J; Durupinar F
- Publication year: 2024
- Publication venue: 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
Snippet
Animating performances for story-based games is a difficult and labor-intensive task. Although much research in animation and intelligent agents has focused on the problem of generating animation from textual descriptions, this work explores a novel approach through …
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T13/00—Animation
        - G06T13/20—3D [Three Dimensional] animation
          - G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
        - G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
          - G10L21/10—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids transforming into visible information
            - G10L2021/105—Synthesis of the lips movements from speech, e.g. for talking heads
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computer systems based on biological models
        - G06N3/004—Artificial life, i.e. computers simulating life
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L15/00—Speech recognition
        - G10L15/08—Speech classification or search
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computer systems based on biological models
        - G06N3/02—Computer systems based on biological models using neural network models
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L13/00—Speech synthesis; Text to speech systems
        - G10L13/02—Methods for producing synthetic speech; Speech synthesisers
          - G10L13/027—Concept to speech synthesisers; Generation of natural phrases from machine-based concepts
Similar Documents
Publication | Title
---|---
Bhattacharya et al. | Text2gestures: A transformer-based network for generating emotive body gestures for virtual agents
Yoon et al. | Robots learn social skills: End-to-end learning of co-speech gesture generation for humanoid robots
KR102720491B1 (en) | Template-based generation of 3D object meshes from 2D images
Busso et al. | Rigid head motion in expressive speech animation: Analysis and synthesis
Sonlu et al. | A conversational agent framework with multi-modal personality expression
Marsella et al. | Virtual character performance from speech
Hanson et al. | Upending the uncanny valley
CN110688911A (en) | Video processing method, device, system, terminal equipment and storage medium
US20120130717A1 | Real-time Animation for an Expressive Avatar
Thomas et al. | Investigating how speech and animation realism influence the perceived personality of virtual characters and agents
Normoyle et al. | Using LLMs to Animate Interactive Story Characters with Emotions and Personality
Bozkurt et al. | Affect-expressive hand gestures synthesis and animation
Corradini et al. | Animating an interactive conversational character for an educational game system
Čereković et al. | Multimodal behavior realization for embodied conversational agents
Filntisis et al. | Video-realistic expressive audio-visual speech synthesis for the Greek language
Gebhard et al. | Coloring multi-character conversations through the expression of emotions
Nichols et al. | I can't believe that happened!: exploring expressivity in collaborative storytelling with the tabletop robot haru
Barbulescu et al. | A generative audio-visual prosodic model for virtual actors
Lee et al. | Designing an expressive avatar of a real person
Filntisis et al. | Photorealistic adaptation and interpolation of facial expressions using HMMS and AAMS for audio-visual speech synthesis
Khan | An Approach of Lip Synchronization With Facial Expression Rendering for an ECA
Mukashev et al. | Facial expression generation of 3D avatar based on semantic analysis
Corradini et al. | Towards believable behavior generation for embodied conversational agents
Guillermo et al. | Emotional 3D speech visualization from 2D audio visual data
Cordar et al. | Making virtual reality social