Humanoid active audition system (Nakadai et al., 2000) - Google Patents
- Document ID
- 9175389237940313592
- Author
- Nakadai K
- Okuno H
- Lourens T
- Kitano H
- Publication year
- 2000
- Publication venue
- IEEE-RAS International Conference on Humanoid Robots
Snippet
Perception for a humanoid should be active, e.g., by moving its body or by controlling sensor parameters such as those of cameras or microphones, to understand its environment better. Active vision is one of the common capabilities of a humanoid. Active perception usually …
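The snippet describes active audition only at a high level. As a hedged illustration of one building block such binaural systems commonly rely on (not a reproduction of the paper's actual method), the sketch below estimates a sound source's azimuth from the interaural time difference between two "ear" microphones via cross-correlation and derives a proportional head-turn command toward the source. The microphone spacing, sampling rate, and control gain are assumed values chosen for the example.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, at roughly 20 degrees C
MIC_DISTANCE = 0.18      # m, assumed spacing between the two "ear" microphones

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (seconds) between two
    microphone signals from the peak of their cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # lag in samples
    return lag / fs

def itd_to_azimuth(itd):
    """Convert an ITD to a source azimuth (radians) with the standard
    far-field two-microphone model: itd = d * sin(theta) / c."""
    s = np.clip(itd * SPEED_OF_SOUND / MIC_DISTANCE, -1.0, 1.0)
    return np.arcsin(s)

def head_turn_command(azimuth, gain=0.5):
    """Proportional rotation command (radians) toward the source; facing
    the source drives the ITD, and hence the error, toward zero."""
    return gain * azimuth

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 0.1, 1 / fs)
    sig = np.sin(2 * np.pi * 440 * t)
    # Simulate a source off to the right: the left channel lags by 5 samples.
    left, right = np.roll(sig, 5), sig
    az = itd_to_azimuth(estimate_itd(left, right, fs))
    print(f"estimated azimuth: {np.degrees(az):.1f} deg, "
          f"turn command: {np.degrees(head_turn_command(az)):.1f} deg")
```

Turning the head toward the estimated azimuth drives the measured time difference toward zero, which is the sense in which the perception here is active rather than passive.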
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic or multiview television systems; Details thereof
- H04N13/02—Picture signal generators
- H04N13/0203—Picture signal generators using a stereoscopic image camera
- H04N13/0239—Picture signal generators using a stereoscopic image camera having two 2D image pickup sensors representing the interocular distance

- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
Similar Documents
| Publication | Title |
|---|---|
| Nakadai et al. | Active audition for humanoid |
| CN106664501B (en) | Systems, devices and methods for consistent acoustic scene reproduction based on informed spatial filtering |
| Aarabi et al. | Robust sound localization using multi-source audiovisual information fusion |
| CN104106267B (en) | Signal enhancing beamforming in augmented reality environment |
| US10721521B1 (en) | Determination of spatialized virtual acoustic scenes from legacy audiovisual media |
| CA2784862C (en) | An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal |
| JP3627058B2 (en) | Robot audio-visual system |
| WO2002072317A1 (en) | Robot audiovisual system |
| US10542368B2 (en) | Audio content modification for playback audio |
| Nakadai et al. | Epipolar geometry based sound localization and extraction for humanoid audition |
| O'Donovan et al. | Microphone arrays as generalized cameras for integrated audio visual processing |
| JP7170069B2 (en) | Audio device and method of operation thereof |
| Chen et al. | Novel-view acoustic synthesis |
| Ban et al. | Exploiting the complementarity of audio and visual data in multi-speaker tracking |
| JP3632099B2 (en) | Robot audio-visual system |
| Okuno et al. | Sound and visual tracking for humanoid robot |
| JP3843743B2 (en) | Robot audio-visual system |
| Nakadai et al. | Humanoid active audition system |
| Song et al. | Personal 3D audio system with loudspeakers |
| JP3843741B2 (en) | Robot audio-visual system |
| Pfreundtner et al. | (W)Earable Microphone Array and Ultrasonic Echo Localization for Coarse Indoor Environment Mapping |
| Okuno et al. | Sound and visual tracking for humanoid robot |
| Catalbas et al. | Dynamic speaker localization based on a novel lightweight R-CNN model |
| JP3843742B2 (en) | Robot audio-visual system |
| Okuno et al. | Sound and Visual Tracking by Active Audition |