Xu et al., 2023 - Google Patents
An adaptive control framework based multi-modal information-driven dance composition model for musical robots
- Document ID
- 8206778431364305640
- Author
- Xu F
- Xia Y
- Wu X
- Publication year
- 2023
- Publication venue
- Frontiers in Neurorobotics
Snippet
Currently, most robot dances are pre-compiled; the need to manually adjust relevant parameters and meta-actions in order to adapt the dance to a different type of music greatly limits their usefulness. To overcome this gap, this study proposed a dance composition …
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/50—Computer-aided design
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
Similar Documents
Publication | Publication Date | Title
---|---|---
Goodrich et al. | | Teleoperation and beyond for assistive humanoid robots
US20110144804A1 (en) | | Device and method for expressing robot autonomous emotions
Rázuri et al. | | Automatic emotion recognition through facial expression analysis in merged images based on an artificial neural network
Castillo et al. | | Emotion detection and regulation from personal assistant robot in smart environment
Fischer et al. | | iCub-HRI: a software framework for complex human–robot interaction scenarios on the iCub humanoid robot
Horii et al. | | Imitation of human expressions based on emotion estimation by mental simulation
Gomez Cubero et al. | | The robot is present: Creative approaches for artistic expression with robots
Basori | | Emotion walking for humanoid avatars using brain signals
Abe | | Beyond anthropomorphising robot motion and towards robot-specific motion: consideration of the potential of artist–dancers in research on robotic motion
Teng et al. | | Multidimensional deformable object manipulation based on DN-transporter networks
Kerzel et al. | | NICOL: A neuro-inspired collaborative semi-humanoid robot that bridges social interaction and reliable manipulation
Röning et al. | | Minotaurus: A system for affective human–robot interaction in smart environments
Fang et al. | | Data-driven heuristic dynamic programming with virtual reality
Zhang et al. | | Real-time learning and recognition of assembly activities based on virtual reality demonstration
Xu et al. | | An adaptive control framework based multi-modal information-driven dance composition model for musical robots
Lim et al. | | A Sign Language Recognition System with Pepper, Lightweight-Transformer, and LLM
Modler | | Neural networks for mapping hand gestures to sound synthesis parameters
Wu et al. | | A developmental evolutionary learning framework for robotic Chinese stroke writing
Zhang et al. | | Towards a Framework for Social Robot Co-speech Gesture Generation with Semantic Expression
Morasso et al. | | Pinocchio: A language for action representation
Guo et al. | | Locomotion skills for insects with sample-based controller
Zhang et al. | | A Multi-modal Virtual-Real Fusion System for Multi-task Human-Computer Interaction
Arnett et al. | | Smart trashcan brothers: early childhood environmental education through green robotics
Zhao | | Live Emoji: Semantic Emotional Expressiveness of 2D Live Animation
Cuan | | Compelling Robot Behaviors through Supervised Learning and Choreorobotics