Kang et al., 2023 - Google Patents
IPS: Integrating Pose with Speech for enhancement of body pose estimation in VR remote collaboration
- Document ID
- 4683233895527519735
- Authors
- Kang S
- Jeon S
- Woo W
- Publication year
- 2023
- Publication venue
- 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
Snippet
We propose a Speech-Pose integration method to overcome the limitations of existing body pose estimation. Unlike previous speech-based gesture generation methods, our proposal reflects the user's actual pose using a vision-based system and speech as a subsidiary …
Classifications
All classifications fall under G—PHYSICS › G06—COMPUTING; CALCULATING; COUNTING, in the G06F (electrical digital data processing) and G06K (recognition and presentation of data) subclasses:
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures, for entering handwritten data, e.g. gestures, text
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06K9/00281—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
- G06K9/6268—Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
- G06K9/00335—Recognising movements or behaviour, e.g. recognition of gestures, dynamic facial expressions; Lip-reading
- G06K9/00362—Recognising human body or animal bodies, e.g. vehicle occupant, pedestrian; Recognising body parts, e.g. hand
Similar Documents
Publication | Title
---|---
Canal et al. | A real-time human-robot interaction system based on gestures for assistive scenarios
US10901500B2 (en) | Eye gaze for spoken language understanding in multi-modal conversational interactions
US10664060B2 (en) | Multimodal input-based interaction method and device
Maurtua et al. | Natural multimodal communication for human–robot collaboration
Jaimes et al. | Multimodal human computer interaction: A survey
Morency et al. | Head gestures for perceptual interfaces: The role of context in improving recognition
Rossi et al. | An extensible architecture for robust multimodal human-robot communication
Liang et al. | Barehanded music: real-time hand interaction for virtual piano
Peral et al. | Efficient hand gesture recognition for human-robot interaction
Gaschler et al. | Modelling state of interaction from head poses for social human-robot interaction
Stoeva et al. | Body language in affective human-robot interaction
McColl et al. | Affect detection from body language during social HRI
Liu et al. | Obstacle avoidance through gesture recognition: Business advancement potential in robot navigation socio-technology
Yin | Real-time continuous gesture recognition for natural multimodal interaction
Kang et al. | IPS: Integrating Pose with Speech for enhancement of body pose estimation in VR remote collaboration
Jindal et al. | A comparative analysis of established techniques and their applications in the field of gesture detection
Al Moubayed et al. | The furhat social companion talking head
Akmeliawati et al. | Assistive technology for relieving communication lumber between hearing/speech impaired and hearing people
Suganya et al. | Design of a communication aid for physically challenged
Helmert et al. | Design and Evaluation of an AR Voice-based Indoor UAV Assistant for Smart Home Scenarios
Vidya et al. | Gesture-based control of presentation slides using OpenCV
Vultur | Performance Analysis of "Drive Me" - a Human Robot Interaction System
Tung et al. | Multi-party human-machine interaction using a smart multimodal digital signage
Hanheide et al. | Combining environmental cues & head gestures to interact with wearable devices
Lai et al. | Intuitive multi-modal human-robot interaction via posture and voice