Ouni et al., 2016 - Google Patents
Is markerless acquisition technique adequate for speech production? (Ouni et al., 2016)
- Document ID: 12913038462439988138
- Authors: Ouni S; Dahmani S
- Publication year: 2016
- Publication venue: The Journal of the Acoustical Society of America
Snippet
2. Methods: We compare two markerless systems: PS Carmine and Intel RS. The main differences between the two systems are the frame rate (30 fps vs 60 fps) and, to some extent, the depth sensor range of the camera. We use as a reference system a marker-based tracker …
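The snippet describes comparing markerless captures recorded at different frame rates against a marker-based reference. The sketch below is an illustrative aside only, not the paper's actual evaluation pipeline: it assumes NumPy trajectories of shape (frames, landmarks, 3), an assumed 100 fps reference rate, and hypothetical helper names (`resample`, `rmse_per_landmark`), and shows one common way such a comparison could be set up by resampling each markerless track to the reference rate and computing a per-landmark RMSE.

```python
# Illustrative sketch only (not the authors' pipeline). All frame rates,
# array shapes, and function names are assumptions for this example.
import numpy as np


def resample(track, src_fps, dst_fps):
    """Linearly resample a (frames, landmarks, 3) trajectory to dst_fps."""
    n_src = track.shape[0]
    duration = (n_src - 1) / src_fps
    t_src = np.linspace(0.0, duration, n_src)
    t_dst = np.arange(0.0, duration, 1.0 / dst_fps)
    flat = track.reshape(n_src, -1)
    out = np.stack(
        [np.interp(t_dst, t_src, flat[:, j]) for j in range(flat.shape[1])],
        axis=1,
    )
    return out.reshape(len(t_dst), *track.shape[1:])


def rmse_per_landmark(markerless, reference):
    """Per-landmark RMSE over the frames both recordings share."""
    n = min(len(markerless), len(reference))
    diff = markerless[:n] - reference[:n]
    return np.sqrt((diff ** 2).sum(axis=-1).mean(axis=0))


# Hypothetical stand-in data: 10 facial landmarks, roughly 5 s of capture.
ref = np.random.rand(500, 10, 3)    # marker-based reference at 100 fps (assumed)
cam30 = np.random.rand(150, 10, 3)  # markerless device at 30 fps
cam60 = np.random.rand(300, 10, 3)  # markerless device at 60 fps

err30 = rmse_per_landmark(resample(cam30, 30, 100), ref)
err60 = rmse_per_landmark(resample(cam60, 60, 100), ref)
print("mean RMSE, 30 fps device:", err30.mean())
print("mean RMSE, 60 fps device:", err60.mean())
```

Linear interpolation is only the simplest alignment choice here; a real comparison would also require temporal synchronization and spatial registration between the markerless and marker-based coordinate systems.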
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
        - G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
          - G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
          - G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
          - G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N7/00—Television systems
        - H04N7/14—Systems for two-way working
          - H04N7/15—Conference systems
      - H04N5/00—Details of television systems
        - H04N5/222—Studio circuitry; Studio devices; Studio equipment; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
          - H04N5/225—Television cameras; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
Similar Documents
Publication | Title
---|---
JP7275227B2 (en) | Recording virtual and real objects in mixed reality devices | |
Pouw et al. | The quantification of gesture–speech synchrony: A tutorial and validation of multimodal data acquisition using device-based and video-based motion tracking | |
CA3029123A1 (en) | Positional audio assignment system | |
Danner et al. | Quantitative analysis of multimodal speech data | |
CN110505399A (en) | Control method, device and the acquisition terminal of Image Acquisition | |
CN107924392A (en) | Annotation based on posture | |
Chen et al. | Gestonhmd: Enabling gesture-based interaction on low-cost vr head-mounted display | |
US10534963B2 (en) | Systems and methods for identifying video highlights based on audio | |
JP2000352996A (en) | Information processing device | |
US9207761B2 (en) | Control apparatus based on eyes and method for controlling device thereof | |
KR20140146750A (en) | Method and system for gaze-based providing education content | |
CN105247453A (en) | Virtual and augmented reality instruction system | |
KR20120072244A (en) | System and method for integrating gesture and sound for controlling device | |
Yargıç et al. | A lip reading application on MS Kinect camera | |
Burger et al. | Synchronizing eye tracking and optical motion capture: How to bring them together | |
Du et al. | Human–robot collaborative control in a virtual-reality-based telepresence system | |
Ouni et al. | Is markerless acquisition technique adequate for speech production? | |
Gao et al. | Sonicface: Tracking facial expressions using a commodity microphone array | |
Jayagopi et al. | The vernissage corpus: A multimodal human-robot-interaction dataset | |
Nirme et al. | Motion capture-based animated characters for the study of speech–gesture integration | |
CN109246412A (en) | A kind of operating room record system and method, operating room | |
Stefanidis et al. | 3D technologies and applications in sign language | |
Overholt et al. | A multimodal system for gesture recognition in interactive music performance | |
Sui et al. | A 3D audio-visual corpus for speech recognition | |
Caridakis et al. | A multimodal corpus for gesture expressivity analysis |