Shiratori et al., 2006 - Google Patents
Dancing‐to‐music character animation
- Document ID
- 633537909590779081
- Author
- Shiratori T
- Nakazawa A
- Ikeuchi K
- Publication year
- 2006
- Publication venue
- Computer Graphics Forum
Snippet
In computer graphics, considerable research has been conducted on realistic human motion synthesis. However, most research does not consider human emotional aspects, which often strongly affect human motion. This paper presents a new approach for synthesizing dance …
Classifications
- G10H—Electrophonic musical instruments
  - G10H1/0008—Associated control or indicating means
  - G10H1/36—Accompaniment arrangements
  - G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
  - G10H2240/075—Musical metadata derived from musical analysis or for use in electrophonic musical instruments
  - G10H2240/305—Internet or TCP/IP protocol use for any electrophonic musical instrument data or musical parameter transmission purposes
- G10L—Speech analysis or synthesis; speech recognition; speech or audio coding or decoding
  - G10L13/00—Speech synthesis; text-to-speech systems
  - G10L15/00—Speech recognition
  - G10L17/00—Speaker identification or verification
  - G10L19/00—Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; coding or decoding of speech or audio signals using source filter models or psychoacoustic analysis
  - G10L21/013—Adapting to target pitch
  - G10L2021/105—Synthesis of the lips movements from speech, e.g. for talking heads
  - G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
- G06F—Electrical digital data processing
  - G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
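Among the classifications above, G10H2210/031 (musical analysis: extraction of musical elements or parameters from an audio signal) is the one most relevant to music-driven animation, where rhythm must be recovered from the soundtrack before motion can be synchronized to it. As a rough illustration of that kind of analysis, and not the method of the cited paper, the sketch below estimates tempo from an onset-strength envelope via its autocorrelation peak; the envelope, frame rate, and BPM bounds are all assumed inputs:

```python
import numpy as np

def estimate_tempo(onset_env, frame_rate, bpm_min=60.0, bpm_max=180.0):
    """Estimate tempo in BPM by locating the autocorrelation peak of an
    onset-strength envelope within a plausible beat-period range."""
    x = onset_env - onset_env.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # keep lags >= 0
    lag_min = int(round(frame_rate * 60.0 / bpm_max))  # shortest beat period
    lag_max = int(round(frame_rate * 60.0 / bpm_min))  # longest beat period
    best = lag_min + int(np.argmax(ac[lag_min:lag_max + 1]))
    return 60.0 * frame_rate / best

# Synthetic onset envelope: one pulse every 0.5 s at 100 frames/s (120 BPM).
frame_rate = 100
env = np.zeros(1000)
env[::50] = 1.0
print(estimate_tempo(env, frame_rate))  # 120.0
```

A real pipeline would compute the onset envelope from audio (e.g. spectral flux over short-time spectra) rather than synthesize it, and would typically refine the raw autocorrelation peak into beat times for motion alignment.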
Similar Documents
Publication | Title
---|---
Shiratori et al. | Dancing‐to‐music character animation
Ofli et al. | Learn2dance: Learning statistical music-to-dance mappings for choreography synthesis
Fan et al. | Example-based automatic music-driven conventional dance motion synthesis
Lee et al. | Music similarity-based approach to generating dance motion sequence
Zhu et al. | Quantized GAN for complex music generation from dance videos
Kao et al. | Temporally guided music-to-body-movement generation
Emmerson et al. | Expanding the horizon of electroacoustic music analysis
Sadoughi et al. | Creating prosodic synchrony for a robot co-player in a speech-controlled game for children
KR102192210B1 (en) | Method and Apparatus for Generation of LSTM-based Dance Motion
Fukayama et al. | Music content driven automated choreography with beat-wise motion connectivity constraints
Shiratori et al. | Synthesis of dance performance based on analyses of human motion and music
Cardle et al. | Music-driven motion editing: Local motion transformations guided by music analysis
Lin et al. | A human-computer duet system for music performance
Liu et al. | MusicFace: Music-driven expressive singing face synthesis
Misra et al. | A new paradigm for sound design
Bogaers et al. | Music-driven animation generation of expressive musical gestures
CN113178182A (en) | Information processing method, information processing device, electronic equipment and storage medium
Sunardi et al. | Music to motion: Using music information to create expressive robot motion
Tang et al. | AniDance: Real-time dance motion synthesize to the song
Grill | Perceptually informed organization of textural sounds
Delgado et al. | A new dataset for amateur vocal percussion analysis
Shiu et al. | Robust on-line beat tracking with Kalman filtering and probabilistic data association (KF-PDA)
Wang | Multimodal robotic music performance art based on GRU-GoogLeNet model fusing audiovisual perception
Ganguli et al. | An approach to adding knowledge constraints to a data-driven generative model for Carnatic rhythm sequence