DOI: 10.1145/3411109.3412300
research-article

Fast synthesis of perceptually adequate room impulse responses from ultrasonic measurements

Published: 16 September 2020

Abstract

Audio augmented reality (AAR) applications need to render virtual sounds with acoustic effects that match the user's real environment in order to create an experience with a strong sense of presence. This audio rendering process can be formulated as the convolution of the dry sound signal with the room impulse response (IR), which covers the audible frequency spectrum (20 Hz - 20 kHz). While the IR can be pre-computed in virtual reality (VR) scenes, AR applications need to estimate it continuously. We propose a method to synthesize room IRs from the corresponding IR in the ultrasound frequency band (20 kHz - 22 kHz) and two parameters introduced in this paper: the slope factor and the RT60 ratio. We assess the synthesized IRs using common acoustic metrics, and we conducted a user study to evaluate the perceptual similarity between sounds rendered with the synthesized IRs and sounds rendered with recorded IRs in different rooms. The method requires only a small number of pre-measurements in the environment to determine the synthesis parameters, and it uses only inaudible signals at runtime for fast IR synthesis, making it well suited for interactive AAR applications.
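The abstract relies on two standard signal-processing building blocks: auralization by convolving a dry (anechoic) signal with a room IR, and reverberation time (RT60) estimation from an IR. The sketch below illustrates both; it does not reproduce the paper's ultrasonic synthesis method, and the proposed slope factor and RT60 ratio parameters are not modeled. The RT60 estimate uses the well-known Schroeder backward-integration approach with an assumed -5 dB to -35 dB fit range; the function names, sample rate, and synthetic test IR are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): convolution-based auralization
# and RT60 estimation via Schroeder backward integration. Requires numpy and scipy.
import numpy as np
from scipy.signal import fftconvolve


def render_with_ir(dry_signal: np.ndarray, room_ir: np.ndarray) -> np.ndarray:
    """Auralize a dry (anechoic) signal by convolving it with a room impulse response."""
    wet = fftconvolve(dry_signal, room_ir, mode="full")
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet  # normalize to avoid clipping


def estimate_rt60(ir: np.ndarray, fs: int, fit_range=(-5.0, -35.0)) -> float:
    """Estimate RT60 from an IR using Schroeder backward integration.

    A line is fitted to the energy-decay curve between fit_range dB (here -5 to -35 dB)
    and extrapolated to -60 dB, i.e. a T30-style RT60 estimate.
    """
    energy = ir.astype(np.float64) ** 2
    edc = np.cumsum(energy[::-1])[::-1]              # backward cumulative energy (Schroeder integral)
    edc_db = 10.0 * np.log10(edc / edc[0] + 1e-12)   # normalize to 0 dB at t = 0

    hi, lo = fit_range
    idx = np.where((edc_db <= hi) & (edc_db >= lo))[0]
    t = idx / fs
    slope, _ = np.polyfit(t, edc_db[idx], 1)         # decay rate in dB per second (negative)
    return -60.0 / slope                             # time to decay by 60 dB


# Example: a synthetic exponentially decaying noise IR with roughly 0.5 s RT60
fs = 48000
t = np.arange(int(0.8 * fs)) / fs
ir = np.random.randn(t.size) * np.exp(-6.91 * t / 0.5)  # 6.91 = ln(1000), 60 dB decay over 0.5 s
print(f"Estimated RT60: {estimate_rt60(ir, fs):.2f} s")
```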

References

[1]
ISO 3382-1. 2009. Acoustics---Measurement of room acoustic parameters---Part 1: Performance spaces.
[2]
Jont B Allen and David A Berkley. 1979. Image Method for Efficiently Simulating Small-Room Acoustics. The Journal of the Acoustical Society of America 65, 4 (1979), 943--950.
[3]
Will Bailey and Bruno Fazenda. 2018. The Effect of Visual Cues and Binaural Rendering Method on Plausibility in Virtual Environments. In Audio Engineering Society Convention 144. Audio Engineering Society.
[4]
Vincent Becker, Linus Fessler, and Gábor Sörös. 2019. GestEar: Combining Audio and Motion Sensing for Gesture Recognition on Smartwatches. In ACM ISWC. London, UK.
[5]
Ingolf Bork. 2005. Report on the 3rd Round Robin on Room Acoustical Computer Simulation-Part I: Measurements. Acta Acustica united with Acustica 91, 4 (2005), 740--752.
[6]
Claus Lynge Christensen, George Koutsouris, and Jens Holger Rindel. 2013. The ISO 3382 Parameters: Can We Simulate Them? Can We Measure Them?. In ISRA. 9--11.
[7]
Aki Harma, Julia Jakka, Miikka Tikander, Matti Karjalainen, Tapio Lokki, and Heli Nironen. 2003. Techniques and Applications of Wearable Augmented Reality Audio. In Audio Engineering Society Convention 114. Audio Engineering Society.
[8]
Vedad Hulusic, Carlo Harvey, Kurt Debattista, Nicolas Tsingos, Steve Walker, David Howard, and Alan Chalmers. 2012. Acoustic Rendering and Auditory-Visual Cross-Modal Perception and Interaction. In Computer Graphics Forum, Vol. 31. Wiley Online Library, 102--131.
[9]
Hansung Kim, Luca Hernaggi, Philip JB Jackson, and Adrian Hilton. 2019. Immersive Spatial Audio Reproduction for VR/AR using Room Acoustic Modelling from 360 Images. In IEEE VR. 120--126.
[10]
Hansung Kim, Richard J Hughes, Luca Remaggi, Philip JB Jackson, Adrian Hilton, Trevor J Cox, and Ben Shirley. 2017. Acoustic Room Modelling using A Spherical Camera for Reverberant Spatial Audio Objects. In Audio Engineering Society Convention 142. Audio Engineering Society.
[11]
S Kopuz and N Lalor. 1995. Analysis of Interior Acoustic Fields using the Finite Element Method and the Boundary Element Method. Applied Acoustics 45, 3 (1995), 193--210.
[12]
H Kuttruff. 2000. Room Acoustics, UK.
[13]
Gierad Laput, Karan Ahuja, Mayank Goel, and Chris Harrison. 2018. Ubicoustics: Plug-and-play acoustic activity recognition. In ACM UIST. 213--224.
[14]
Pontus Larsson, Aleksander Väljamäe, Daniel Västfjäll, Ana Tajadura-Jiménez, and Mendel Kleiner. 2010. Auditory-Induced Presence in Mixed Reality Environments and Related Technology. In The Engineering of Mixed Reality Systems. Springer, 143--163.
[15]
Dingzeyu Li, Timothy R Langlois, and Changxi Zheng. 2018. Scene-Aware Audio for 360 Videos. ACM TOG 37, 4 (2018), 1--12.
[16]
Zihou Meng, Fengjie Zhao, and Mu He. 2006. The Just Noticeable Difference of Noise Length and Reverberation Perception. In IEEE ISCIT. IEEE, 418--421.
[17]
Nikunj Raghuvanshi and John Snyder. 2014. Parametric Wave Field Coding for Precomputed Sound Propagation. ACM TOG 33, 4 (2014), 1--11.
[18]
Nikunj Raghuvanshi and John Snyder. 2018. Parametric Directional Coding for Precomputed Sound Propagation. ACM TOG 37, 4 (2018), 1--14.
[19]
Lauri Savioja, Dinesh Manocha, and M Lin. 2010. Use of GPUs in Room Acoustic Modeling and Auralization. In ISRA. 3.
[20]
Carl Schissler, Christian Loftin, and Dinesh Manocha. 2017. Acoustic Classification and Optimization for Multi-Modal Rendering of Real-World Scenes. IEEE TVCG 24, 3 (2017), 1246--1259.
[21]
Carl Schissler and Dinesh Manocha. 2016. Interactive Sound Propagation and Rendering for Large Multi-Source Scenes. ACM TOG 36, 4 (2016), 1.
[22]
Manfred R Schroeder. 1965. New method of measuring reverberation time. The Journal of the Acoustical Society of America 37, 6 (1965), 1187--1188.
[23]
Zhenyu Tang, Nicholas J Bryan, Dingzeyu Li, Timothy R Langlois, and Dinesh Manocha. 2020. Scene-Aware Audio Rendering via Deep Acoustic Analysis. IEEE TVCG 26, 5 (2020), 1991--2001.
[24]
Nicolas Tsingos, Wenyu Jiang, and Ian Williams. 2011. Using Programmable Graphics Hardware for Acoustics and Audio Rendering. Journal of the Audio Engineering Society 59, 9 (2011), 628--646.
[25]
Vesa Valimaki, Julian D Parker, Lauri Savioja, Julius O Smith, and Jonathan S Abel. 2012. Fifty years of artificial reverberation. IEEE Transactions on Audio, Speech, and Language Processing 20, 5 (2012), 1421--1448.
[26]
Jacob O Wobbrock, Leah Findlater, Darren Gergle, and James J Higgins. 2011. The Aligned Rank Transform for Nonparametric Factorial Analyses using Only ANOVA Procedures. In ACM CHI. 143--146.
[27]
Jing Yang, Yves Frank, and Gábor Sörös. 2019. Hearing Is Believing: Synthesizing Spatial Audio from Everyday Objects to Users. In ACM AH. 1--9.

Published In

AM '20: Proceedings of the 15th International Audio Mostly Conference
September 2020
281 pages
ISBN: 9781450375634
DOI: 10.1145/3411109

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. auditory perception
  2. augmented reality
  3. room acoustic effects
  4. room impulse response
  5. ultrasound

Qualifiers

  • Research-article

Conference

AM '20: Audio Mostly 2020
September 15-17, 2020
Graz, Austria

Acceptance Rates

AM '20 Paper Acceptance Rate: 29 of 47 submissions, 62%
Overall Acceptance Rate: 177 of 275 submissions, 64%
