
WO2000019415A2 - Method and apparatus for three-dimensional audio display - Google Patents

Method and apparatus for three-dimensional audio display

Info

Publication number
WO2000019415A2
WO2000019415A2 (PCT/US1999/022259)
Authority
WO
WIPO (PCT)
Prior art keywords
signals
functions
audio
audio signal
encoded
Prior art date
Application number
PCT/US1999/022259
Other languages
French (fr)
Other versions
WO2000019415A3 (en)
Inventor
Jean-Marc Jot
Scott Wardle
Original Assignee
Creative Technology Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Creative Technology Ltd.
Priority to AU64006/99A
Priority to US09/806,193 (US7231054B1)
Publication of WO2000019415A2
Publication of WO2000019415A3

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 3/00 Systems employing more than two channels, e.g. quadraphonic
    • H04S 3/02 Systems employing more than two channels, e.g. quadraphonic of the matrix type, i.e. in which input signals are combined algebraically, e.g. after having been phase shifted with respect to each other
    • H04S 2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2400/01 Multi-channel, i.e. more than two input channels, sound reproduction with two speakers wherein the multi-channel information is substantially preserved
    • H04S 2400/15 Aspects of sound capture and related signal processing for recording or reproduction
    • H04S 2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2420/11 Application of ambisonics in stereophonic audio systems

Definitions

  • [Chen96] uses an analysis method known as principal component analysis (PCA) to model any set of HRTFs as a sum of frequency-dependent functions weighted by functions of direction.
  • the two sets of functions are listener-specific (uniquely associated with the head on which the HRTFs were measured) and can be used to model the left filter and the right filter applied to the source signal in the directional encoder.
  • [Abel97] also shows the topologies of Figs. 4a and 4b and uses a singular value decomposition (SVD) technique to model a set of HRTFs in a manner essentially equivalent to the method described in [Chen96], resulting in the simultaneous solution for a set of filters and the directional panning functions.
  • a method for positioning an audio signal includes selecting a set of spatial functions and providing a set of amplifiers, the gains of the amplifiers being dependent on scaling factors associated with the spatial functions. An audio signal is received and a direction for the audio signal is determined. The scaling factors are adjusted depending on the direction. The amplifiers are applied to the audio signal to produce first encoded signals. The audio signal is then delayed, and the amplifiers are applied to the delayed signal to produce second encoded signals. The resulting encoded signals contain directional information.
  • the spatial functions are the spherical harmonic functions.
  • the spherical harmonics may include zero-order and first-order harmonics and higher order harmonics.
  • the spatial functions include discrete panning functions.
  • a decoding of the directionally encoded audio includes providing a set of filters. The filters are defined based on the selected spatial functions.
  • An audio recording apparatus includes first and second multiplier circuits having adjustable gains.
  • a source of an audio signal is provided, the audio signal having a time-varying direction associated therewith.
  • the gains are adjusted based on the direction for the audio.
  • a delay element inserts a delay into the audio signal.
  • the audio and delayed audio are processed by the multiplier circuits, thereby creating directionally encoded signals.
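The encoder described in the bullets above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the integer-sample delays, and the array layout are assumptions (a real encoder would use fractional delays or all-pass filters for the ITD, as discussed later in the document).

```python
import numpy as np

def encode(signal, sample_rate, spatial_funcs, direction, t_left, t_right):
    """Directionally encode `signal`: evaluate each spatial function at
    `direction` to obtain the amplifier gains, then apply those gains to a
    left-delayed and a right-delayed copy of the signal, producing two sets
    of encoded channels that carry the interaural time difference."""
    az, el = direction
    gains = np.array([g(az, el) for g in spatial_funcs])
    dl = int(round(t_left * sample_rate))   # integer-sample delays for simplicity
    dr = int(round(t_right * sample_rate))
    left = np.pad(signal, (dl, 0))[:len(signal)]
    right = np.pad(signal, (dr, 0))[:len(signal)]
    first = gains[:, None] * left[None, :]    # N "left" encoded channels
    second = gains[:, None] * right[None, :]  # N "right" encoded channels
    return first, second
```

Several encoded sources would simply be summed channel by channel in the mixer.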
  • an audio recording system comprises a pair of soundfield microphones for recording an audio source. The soundfield microphones are spaced apart at the positions of the ears of a notional listener.
  • a method for decoding includes deriving a set of spectral functions from preselected spatial functions.
  • the resulting spectral functions are the basis for digital filters which comprise the decoder.
  • a decoder comprising digital filters.
  • the filters are defined based on the spatial functions selected for the encoding of the audio signal.
  • the filters are arranged to produce output signals suitable for feeding into loudspeakers.
  • the present invention provides an efficient method for 3-D audio encoding and playback of multiple sound sources, based on the linear decomposition of HRTFs using spatial panning functions and spectral functions, which guarantees accurate reproduction of ITD cues for all sources over the whole frequency range and uses predetermined panning functions.
  • the use of predetermined panning functions offers the following advantages over methods of the prior art which use principal component analysis or singular value decomposition to determine panning functions and spectral functions: efficient implementation in hardware or software; a non-individual encoding/recording format; adaptation of the decoder to the listener; and improved multi-channel loudspeaker playback.
  • spherical harmonics make it possible to produce recordings using available microphone technology (a pair of Soundfield microphones), and yield a recording format that is a superset of the B Format standard, associated with a special decoding technique for multi-channel loudspeaker playback.
  • Figure 1 Discrete panning over 4 loudspeakers. Example of discrete panning functions.
  • Figure 2 B-format encoding and recording. Playback over 6 loudspeakers using Ambisonic decoding.
  • Figure 3 Binaural encoding and recording. Playback over 2 speakers using cross-talk cancellation.
  • Figure 4 (a) Post-filtering topology, (b) Pre-filtering topology.
  • Figure 5 (a) Post-filtering and (b) pre-filtering topologies, with control of interaural time difference for each sound source.
  • Figure 6 Binaural B Format encoding with decoding for playback over headphones.
  • Figure 7 Original and reconstructed HRTF with Binaural B Format (first-order reconstruction).
  • Figure 8 Binaural B Format reconstruction filters (amplitude frequency response).
  • Figure 9 Binaural B Format decoder for playback over 4 speakers.
  • Figure 10 Binaural Discrete Panning using 6 encoding channels, with decoder for playback over 2 speakers with cross-talk cancellation.
  • Figure 11 Binaural Discrete Panning using 6 encoding channels, with decoder for playback over 4 speakers with cross-talk cancellation.
  • the procedure for modeling HRTFs according to the present invention is as follows. This procedure is associated with the topologies described in Fig. 5a and Fig. 5b for directionally encoding one or several audio signals and decoding them for playback over headphones.
  • Equalization: removal of a common transfer function from all HRTFs measured on one ear.
  • This transfer function can include the effect of the measuring apparatus, loudspeaker, and microphones used. It can also be the delay-free HRTF L (or R) measured for one particular direction (free-field equalization), or a transfer function representing an average of all the delay-free HRTFs L (or R) measured over all positions (diffuse-field equalization).
  • each HRTF is represented as a complex frequency response sampled at a given number of frequencies over a limited frequency range, or, equivalently, as a temporal impulse response sampled at a given sample rate.
  • the HRTF set {L(θ_p, φ_p, f)} or {R(θ_p, φ_p, f)} is represented, in the above decomposition, as a complex function of frequency in which every sample is a function of the spatial variables θ and φ, and this function is represented as a weighted combination of the spatial functions g_i(θ, φ).
  • Step 2 is optional and is associated with the binaural synthesis topologies described in Figs. 5a and 5b, where the delays t_L(θ, φ) and t_R(θ, φ) are introduced in the directional encoding module for each sound source. If step 2 is not applied, the binaural synthesis topologies of Figs. 4a and 4b can be used.
  • the topologies of Figs. 5a and 5b will provide a higher fidelity with fewer encoding channels. It will be noted that adding or subtracting a common delay offset to t_L(θ, φ) and t_R(θ, φ) in the encoding module will have no effect on the perceived direction of sounds during playback, even if the delay offset varies with direction, as long as the interaural time delay difference (ITD), defined below, is preserved for each direction.
  • ITD(θ, φ) = t_R(θ, φ) − t_L(θ, φ).
  • in the methods of [Chen96] and [Abel97], the spatial panning functions cannot be chosen a priori.
  • the technique in accordance with the present invention permits a priori selection of the spatial functions, from which the spectral functions are derived.
  • several benefits of the present invention will result from the possibility of choosing the panning functions a priori and from using a variety of techniques to derive the associated reconstruction filters.
  • An immediate advantage of the invention is that the encoding format in which sounds are mixed in Fig. 5a is devoid of listener-specific features. As discussed below, it is possible, without causing major degradations in reproduction fidelity, to use a listener-independent model of the ITD in carrying out the invention. Generally, it is possible to make a selection of spatial panning functions and tune the reconstruction filters to achieve practical advantages such as: enabling improved reproduction over multi-channel loudspeaker systems, enabling the production of microphone recordings, and preserving a high fidelity of reproduction in chosen directions or regions of space even with a low number of channels.
  • Any transfer function H(f) can be uniquely decomposed into its all-pass component and its minimum-phase component as follows:
  • H(J) exp(j ⁇ (f)) H mm (j) where ⁇ (f), called the excess-phase function of H(f), is defined by
  • ⁇ (f) Arg(H( )) - Re( ⁇ ilbert(-Log
  • the interaural time delay difference, ITD(θ_p, φ_p), can be defined, for each direction (θ_p, φ_p), by a linear approximation of the interaural excess-phase difference.
  • this approximation may be replaced by various alternative methods of estimating the ITD, including time-domain methods such as methods using the cross-correlation function of the left and right HRTFs or methods using a threshold detection technique to estimate an arrival time at each ear.
  • Another possibility is to use a formula for modeling the variation of ITD vs. direction. For instance,
  • ITD(θ, φ) = r/c [ arcsin(cos(φ) sin(θ)) + cos(φ) sin(θ) ], where r is the radius of the head and c is the speed of sound.
  • the value of the radius r can be chosen so that ITD(θ_p, φ_p) is as large as possible without exceeding the value derived from the linear approximation of the interaural excess-phase difference.
  • the value of ITD(θ_p, φ_p) can be rounded to the closest integer number of samples, or the interaural excess-phase difference may be approximated by the combination of a delay unit and a digital all-pass filter.
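The spherical-head ITD formula above can be sketched directly. The default head radius and speed of sound are illustrative assumptions; as the surrounding text notes, r is a tunable parameter rather than a fixed constant.

```python
import numpy as np

def itd(azimuth, elevation, head_radius=0.0875, c=343.0):
    """Spherical-head ITD model (angles in radians):
    ITD = r/c * (arcsin(cos(el) * sin(az)) + cos(el) * sin(az)).
    Positive values mean the sound reaches the left ear first."""
    s = np.cos(elevation) * np.sin(azimuth)
    return head_radius / c * (np.arcsin(s) + s)
```

The model is antisymmetric in azimuth (left/right mirror directions give opposite ITDs) and vanishes in the median plane, as expected.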
  • the delay-free HRTFs, L(θ_p, φ_p, f) and R(θ_p, φ_p, f), from which the reconstruction filters L_i(f) and R_i(f) will be derived, can be identical, respectively, to the minimum-phase HRTFs L_min(θ_p, φ_p, f) and R_min(θ_p, φ_p, f).
  • advantages of spherical harmonics include: a mathematically tractable, closed form (allowing interpolation between directions); mutual orthogonality; a spatial interpretation (e.g. front-back difference); and the facilitation of recording.
  • Fig. 6 illustrates this method in the case where the minimum-phase HRTFs are decomposed over spherical harmonics limited to zero and first order.
  • the directional encoding of the input signal produces an 8-channel encoded signal herein referred to as a "Binaural B Format" encoded signal.
  • the mixer provides for mixing of additional source signals, including synthesized sources.
  • 8 filters are used to decode this format into a binaural output signal.
  • the method can be extended to include any or all of the above higher-order spherical harmonics. Using the higher orders provides for more accurate reconstruction of HRTFs, especially at high frequencies (above 3 kHz).
  • a Soundfield microphone produces B format encoded signals.
  • a Soundfield microphone can be characterized by a set of spherical harmonic functions.
  • encoding a sound in accordance with the invention to produce Binaural B Format encoded signals simulates a free-field recording using two Soundfield microphones located at the notional positions of the two ears. This simulation is exact if the directional encoder provides ITD according to a free-field model.
  • the Binaural B Format recording technique is compatible with currently existing 8-channel digital recording technology.
  • the recording can be decoded for reproduction over headphones through the bank of 8 filters L_i(f) and R_i(f) shown in Fig. 6, or decoded over two or more loudspeakers using methods to be described below.
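Headphone decoding, as described, is a filter-and-sum over the encoded channels. A minimal time-domain sketch, assuming FIR reconstruction filters; the function and argument names are illustrative, not the patent's.

```python
import numpy as np

def decode_binaural(channels_L, channels_R, filters_L, filters_R):
    """Decode for headphones: convolve each left-ear encoded channel with
    its reconstruction filter and sum; likewise for the right ear."""
    out_L = sum(np.convolve(ch, h) for ch, h in zip(channels_L, filters_L))
    out_R = sum(np.convolve(ch, h) for ch, h in zip(channels_R, filters_R))
    return out_L, out_R
```

With first-order Binaural B Format there are four channels (W, X, Y, Z) and four filters per ear.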
  • additional sources can be encoded in Binaural B Format and mixed into the recording.
  • the Binaural B Format offers the additional advantage that the set of four left or right channels can be used with conventional Ambisonic decoders for loudspeaker playback.
  • Other advantages of using spherical harmonics as the spatial panning functions in carrying out the invention will be apparent in connection with multi-channel loudspeaker playback, offering an improved fidelity of 3-D audio reproduction compared to Ambisonic techniques.
  • the derivation of the N reconstruction filters L_i(f) will be illustrated in the case where the spatial panning functions g_i(θ_p, φ_p) are spherical harmonics.
  • the methods described are general and apply regardless of the choice of spatial functions.
  • the problem is to find, for a given frequency (or time), a set of complex scalars L_i(f) so that the linear combination of the spatial functions g_i(θ_p, φ_p), weighted by the L_i(f), approximates the spatial variation of the HRTF L(θ_p, φ_p, f) at that frequency (or time).
  • This problem can be conveniently represented by the matrix equation L(f) ≈ G [L_1(f), …, L_N(f)]^T, where L(f) collects the HRTF values at the P measured directions.
  • each spatial panning function g_i(θ_p, φ_p) defines a P×1 vector G_i
  • the matrix G is the P×N matrix whose columns are the vectors G_i
  • ⟨g_i, g_k⟩ = 1/(4π) ∫∫ g_i(θ, φ) g_k(θ, φ) cos(φ) dθ dφ
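With G and the measured HRTF spectra in hand, the reconstruction filters can be obtained as a least-squares fit per frequency bin. This is one way to solve the matrix equation above, under the assumption that the HRTF data are stacked as a P×F array; it is not necessarily the numerical procedure used in the patent.

```python
import numpy as np

def reconstruction_filters(G, hrtf):
    """Given the P x N matrix G of spatial-function values at the P measured
    directions, and `hrtf` of shape (P, F) holding the HRTF spectrum of each
    direction, solve min || G @ Lf - hrtf || for the N filter spectra."""
    Lf, *_ = np.linalg.lstsq(G, hrtf, rcond=None)  # all F bins at once
    return Lf  # shape (N, F): the filter spectra L_i(f)
```

When the HRTFs are exactly a linear combination of the panning functions, the fit recovers the filters exactly; otherwise it is the best approximation in the least-squares sense.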
  • the original data are diffuse-field equalized HRTFs derived from measurements on a dummy head. Due to the limitation to first-order harmonics, the reconstruction matches the original magnitude spectra reasonably well up to about 2 or 3 kHz, but the performance tends to degrade with increasing frequency. For large-scale applications, a gentle degradation at high frequencies can be acceptable, since inter-individual differences in HRTFs typically become prominent at frequencies above 5 kHz.
  • the frequency responses of the reconstruction filters obtained in this case are shown on Fig. 8.
  • An advantage of a recording made in accordance with the invention over a conventional two-channel dummy head recording is that, unlike prior art encoded signals, Binaural B Format encoded signals do not contain spectral HRTF features. These features are only introduced at the decoding stage by the reconstruction filters L_i(f). Contrary to a conventional binaural recording, a Binaural B Format recording allows listener-specific adaptation at the reproduction stage, in order to reduce the occurrence of artifacts such as front-back reversals and in-head or elevated localization of frontal sound events.
  • Listener-specific adaptation can be achieved even more effectively in the context of a real-time digital mixing system.
  • the technique of the present invention readily lends itself to a real-time mixing approach and can be conveniently implemented as it only involves the correction of the head radius r for the synthesis of ITD cues and the adaptation of the four reconstruction filters L_i(f). If diffuse-field equalization is applied to the headphones and to the measured HRTFs, and therefore to the reconstruction filters L_i(f), the adaptation only needs to address direction-dependent features related to the morphology of the listener, rather than variations in HRTF measurement apparatus and conditions.
  • An advantage of discrete panning functions is that fewer operations are needed in the encoding module (multiplying by a panning weight and adding into the mix is only necessary for the encoding channels which have non-zero weights).
  • each discrete panning function covers a particular region of space, and admits a "principal direction" (the direction for which the panning weight reaches 1). Therefore, a suitable reconstruction filter can be the HRTF corresponding to that principal direction. This will guarantee exact reconstruction of the HRTF for that particular direction.
  • a combination of the principal direction and the nearest directions can be used to derive the reconstruction filter.
  • the set of reconstruction filters obtained according to the present invention will provide a two-channel output signal suitable for high-fidelity 3D audio playback over headphones.
  • this two-channel signal can be further processed through a cross-talk cancellation network in order to provide a two-channel signal suitable for playback over two loudspeakers placed in front of the listener.
  • This technique can produce convincing lateral sound images over a frontal pair of loudspeakers, covering azimuths up to about ⁇ 120°.
  • lateral sound images tend to collapse into the loudspeakers in response to rotations and translations of the listener's head.
  • the technique is also less effective for sound events assigned to rear or elevated positions, even when the listener sits at the "sweet spot".
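A cross-talk canceller of the kind referenced here inverts, per frequency, the symmetric 2×2 matrix of ipsilateral and contralateral speaker-to-ear responses. A sketch under a symmetric-listening assumption; the regularization term is my addition to keep the inverse bounded near singular frequencies, and is not taken from the patent.

```python
import numpy as np

def crosstalk_canceller(H_ipsi, H_contra, reg=1e-3):
    """Per-frequency inverse of the speaker-to-ear matrix
    [[H_ipsi, H_contra], [H_contra, H_ipsi]] (arrays over frequency bins).
    Returns the canceller entries C11, C12; apply as [[C11, C12], [C12, C11]]."""
    det = H_ipsi**2 - H_contra**2
    det = det + reg               # crude regularization (assumption)
    C11 = H_ipsi / det
    C12 = -H_contra / det
    return C11, C12
```

With reg=0 the canceller is exact: cascading it with the acoustic paths yields unity to the ipsilateral ear and zero to the contralateral ear.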
  • Fig. 9 illustrates how, in the case of spherical harmonic panning functions, the reconstruction filters L_i(f) can be utilized to provide improved reproduction over multi-channel loudspeaker playback systems.
  • An advantage of the Binaural B Format is that it contains information for discriminating rear sounds from frontal sounds. This property can be exploited in order to overcome the limitations of 2-channel transaural reproduction, by decoding over a 4-channel loudspeaker setup.
  • the 4-channel decoding network shown in Fig. 9 makes use of sums and differences of the encoded signals.
  • the binaural signal is decomposed as follows:
  • L(θ, φ, f) = LF(θ, φ, f) + LB(θ, φ, f)
  • LF and LB are the "front" and "back" binaural signals.
  • the network of Fig. 9 is designed to eliminate front-back confusions, by reproducing frontal sounds over the front loudspeakers and rear sounds over the rear loudspeakers, while elevated or lateral sounds are reproduced via both pairs of loudspeakers.
  • Fig. 10 illustrates how the present invention, applied with discrete panning functions, can be advantageously used to provide three-dimensional audio playback over two loudspeakers placed in front of the listener, with cross-talk cancellation.
  • the reconstruction filters and the cross-talk cancellation networks are free-field equalized, for each ear, with respect to the direction of the closest loudspeaker.
  • L_i|j(f) = L(θ_i, φ_i, f) / L(θ_j, φ_j, f).
  • Fig. 11 illustrates how the decoder of Fig. 10 can be modified to offer further improved three-dimensional audio reproduction over four loudspeakers arranged in a front pair and a rear pair.
  • the method used is similar to the method used in the system of Fig. 9, in that a front cross-talk canceller and a rear cross-talk canceller are used, and they receive different combinations of the left and right encoded signals. These combinations are designed so that frontal sounds are reproduced over the front loudspeakers and rear sounds are reproduced over the rear loudspeakers, while elevated or lateral sounds are reproduced via both pairs of loudspeakers.
  • Fig. 11 shows an embodiment of the present invention using 6 encoding channels for each ear, where channels 1 and 2 are front left and right channels, channels 5 and 4 are rear left and right channels, and channels 3 and 6 are lateral and/or elevated channels.
  • a particularly advantageous property of this embodiment is that, if an audio signal is panned towards the direction of one of the four loudspeakers (corresponding to the principal direction of one of the channels 1, 2, 4, or 5), it is fed with no modification to that loudspeaker and cancelled out from the outputs feeding the three other loudspeakers. It is noted that, generally, the systems of Fig. 10 or Fig. 11 can be extended to include larger numbers of encoding channels without departing from the principles characterizing the present invention, and that, among these encoding channels, one or more can have their principal direction outside of the horizontal plane so as to provide the reproduction of elevated sounds or of sounds located below the horizontal plane.


Abstract

This invention addresses sound recording and mixing methods for 3-D audio rendering of multiple sound sources over headphones or loudspeaker playback systems. Economical techniques are provided, whereby directional panning and mixing of sounds are performed in a multi-channel encoding format which preserves interaural time difference information and does not contain head-related spectral information. Decoders are provided for converting the multi-channel encoded signal into signals for playback over headphones or various loudspeaker arrangements. These decoders ensure faithful reproduction of directional auditory information at the eardrums of the listener and can be adapted to the number and geometrical layout of the loudspeakers and the individual characteristics of the listener. A particular multi-channel encoding format is disclosed, which, in addition to the above advantages, is associated with a practical microphone technique for producing 3-D audio recordings compliant with the decoders described.

Description

METHOD AND APPARATUS FOR THREE-DIMENSIONAL AUDIO DISPLAY
FIELD OF THE INVENTION
The present invention relates generally to audio recording, and more specifically to the mixing, recording and playback of audio signals for reproducing real or virtual three-dimensional sound scenes at the eardrums of a listener using loudspeakers or headphones.
BACKGROUND
A well-known technique for artificially positioning a sound in a multi-channel loudspeaker playback system consists of weighting an audio signal by a set of amplifiers feeding each loudspeaker individually. This method, described e. g. in [Chowning71], is often referred to as "discrete amplitude panning" when only the loudspeakers closest to the target direction are assigned non-zero weights, as illustrated by the graph of panning functions in Fig. 1. Although Fig. 1 shows a two-dimensional loudspeaker layout, the method can be extended with no difficulty to three-dimensional loudspeaker layouts, as described e. g. in [Pulkki97]. A drawback of this technique is that it requires a high number of channels to provide a faithful reproduction of all directions. Another drawback is that the geometrical layout of the loudspeakers must be known at the encoding and mixing stage.
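Discrete amplitude panning can be sketched for a square four-loudspeaker layout. The constant-power (sine/cosine) law and the speaker azimuths below are illustrative assumptions; the methods cited in [Chowning71] and [Pulkki97] differ in detail.

```python
import numpy as np

def discrete_pan_gains(azimuth_deg, speaker_azimuths_deg=(45.0, 135.0, 225.0, 315.0)):
    """Pairwise constant-power panning: only the two loudspeakers adjacent
    to the target azimuth receive non-zero weights."""
    spk = np.asarray(speaker_azimuths_deg)
    n = len(spk)
    gains = np.zeros(n)
    az = azimuth_deg % 360.0
    for i in range(n):
        j = (i + 1) % n
        span = (spk[j] - spk[i]) % 360.0   # angular width of this speaker pair
        off = (az - spk[i]) % 360.0
        if off <= span:
            frac = off / span              # 0 at speaker i, 1 at speaker j
            gains[i] = np.cos(frac * np.pi / 2)
            gains[j] = np.sin(frac * np.pi / 2)
            break
    return gains
```

The sine/cosine law keeps the sum of squared gains at 1, so perceived loudness stays roughly constant as the source moves between speakers.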
An alternative approach, described in [Gerzon85], consists of producing a 'B-Format' multi-channel signal and reproducing this signal over loudspeakers via an 'Ambisonic' decoder, as illustrated in Fig. 2. Instead of discrete panning functions, the B Format uses real-valued spherical harmonics. The zero-order spherical harmonic function is named W, while the three first-order harmonics are denoted X, Y, and Z. These functions are defined as follows:
W(θ, φ) = 1
X(θ, φ) = cos(φ) cos(θ)
Y(θ, φ) = cos(φ) sin(θ)
Z(θ, φ) = sin(φ)
where θ and φ denote respectively the azimuth and elevation angles of the sound source with respect to the listener, expressed in radians. An advantage of this technique over the discrete panning method is that B Format encoding does not require knowledge of the loudspeaker layout, which is taken into account in the design of the decoder. A second advantage is that a real-world B-Format recording can be produced with practical microphone technology, known as the 'Soundfield Microphone' [Farrah79]. As illustrated in Fig. 2, this allows for combining microphone-encoded sounds with electronically encoded sounds to produce a single B-format recording. First-order Ambisonic decoders do not reconstruct the acoustic pressure information at the ears of the listener except at low frequencies (below about 700 Hz). As described e. g. in [Bamford95], the frequency range can be extended by increasing the order of spherical harmonics, but only at the expense of a higher number of encoding channels and loudspeakers.
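The four panning gains defined by these functions are straightforward to compute. Note that the text's convention W = 1 is used here; some B-Format conventions scale W by 1/√2 instead.

```python
import numpy as np

def b_format_gains(azimuth, elevation):
    """First-order B-Format panning gains (W, X, Y, Z) for a source
    direction given in radians, following the definitions above."""
    w = 1.0
    x = np.cos(elevation) * np.cos(azimuth)  # front-back harmonic
    y = np.cos(elevation) * np.sin(azimuth)  # left-right harmonic
    z = np.sin(elevation)                    # up-down harmonic
    return np.array([w, x, y, z])
```

A source straight ahead (azimuth 0, elevation 0) yields gains (1, 1, 0, 0); a source hard left (azimuth π/2) yields (1, 0, 1, 0).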
3-D audio reproduction techniques which specifically aim at reproducing the acoustic pressure at the two ears of a listener are usually termed binaural techniques. This approach is illustrated in Fig. 3 and reviewed e. g. in [Jot95]. A binaural recording can be produced by inserting miniature microphones in the ear canals of an individual or dummy head. Binaural encoding of an audio signal (also called binaural synthesis) can be performed by applying to a sound signal a pair of left and right filters modeling the head-related transfer functions (HRTFs) measured on an individual or a dummy head for a given direction. As shown in Fig. 3, a HRTF can be modeled as a cascaded combination of a delaying element and a minimum-phase filter, for each of the left and right channels. A binaurally encoded or recorded signal is suitable for playback over headphones. For playback over loudspeakers, a cross-talk canceller is used, as described e. g. in [Gardner97].
Conventional binaural techniques can provide a more convincing 3-D audio reproduction, over headphones or loudspeakers, than the previously described techniques. However, they are not without their own drawbacks and difficulties.
Compared to discrete amplitude panning or B-Format encoding, binaural synthesis involves a significantly larger amount of computation for each sound source. An accurate finite impulse response (FIR) model of an HRTF typically requires a 1-ms-long impulse response, i. e. approximately 100 additions and multiplications per sample period at a sample rate of 48 kHz, which amounts to 5 MIPS (million instructions per second).
The HRTF can only be measured at a set of discrete positions around the head. Designing a binaural synthesis system which can faithfully reproduce any direction and smooth dynamic movements of sounds is a challenging problem involving interpolation techniques and time-variant filters, implying an additional computational effort.
The binaurally recorded or encoded signal contains features related to the morphology of the torso, head, and pinnae. Therefore the fidelity of the reproduction is compromised if the listener's head is not identical to the head used in the recording or the HRTF measurements. In headphone playback, this can cause artifacts such as an artificial elevation of the sound, front-back confusions or inside-the-head localization.
In reproduction over two loudspeakers, the listener must be located at a specific position for lateral sound locations to be convincingly reproduced (beyond the azimuth of the loudspeakers), while rear or elevated sound locations cannot be reproduced reliably.
[Travis96] describes a method for reducing the computational cost of the binaural synthesis and addresses the interpolation and dynamic issues. This method consists of combining a panning technique designed for N-channel loudspeaker playback and a set of N static binaural synthesis filter pairs to simulate N fixed directions (or "virtual loudspeakers") for playback over headphones. This technique leads to the topology of Fig. 4a, where a bank of binaural synthesis filters is applied after panning and mixing of the source signals. An alternative approach, described in [Gehring96], consists of applying the binaural synthesis filters before panning and mixing, as illustrated in Fig. 4b. The filtered signals can be produced off-line and stored so that only the panning and mixing computations need to be performed in real time. In terms of reproduction fidelity, these two approaches are equivalent. Both suffer from the inherent limitations of the multi-channel positioning techniques. Namely, they require a large number of encoding channels to faithfully reproduce the localization and timbre of sound signals in any direction. [Lowe95] describes a variation of the topology of Fig. 4a, in which the directional encoder generates a set of two-channel (left and right) audio signals, with a direction-dependent time delay introduced between the left and right channels, and each two-channel signal is panned between front, back and side "azimuth placement" filters. [Chen96] uses an analysis method known as principal component analysis (PCA) to model any set of HRTFs as a sum of frequency-dependent functions weighted by functions of direction. The two sets of functions are listener-specific (uniquely associated to the head on which the HRTFs were measured) and can be used to model the left filter and the right filter applied to the source signal in the directional encoder. [Abel97] also shows the topologies of Figs. 4a and 4b and uses a singular value decomposition (SVD) technique to model a set of HRTFs in a manner essentially equivalent to the method described in [Chen96], resulting in the simultaneous solution for a set of filters and the directional panning functions.
There remains a need for a computationally efficient technique for high-fidelity 3-D audio encoding and mixing of multiple audio signals. It is desirable to provide an encoding technique that produces a non-listener-specific format. There is a need for a practical recording technique and suitably designed decoders to provide faithful reproduction of the pressure signals at the ears of a listener, over headphones or two-channel and multi-channel loudspeaker playback systems.
SUMMARY OF THE INVENTION
A method for positioning an audio signal includes selecting a set of spatial functions and providing a set of amplifiers, the gains of the amplifiers being dependent on scaling factors associated with the spatial functions. An audio signal is received and a direction for the audio signal is determined. The scaling factors are adjusted depending on the direction. The amplifiers are applied to the audio signal to produce first encoded signals. The audio signal is then delayed, and a second set of amplifiers is applied to the delayed signal to produce second encoded signals. The resulting encoded signals contain directional information. In one embodiment of the invention, the spatial functions are the spherical harmonic functions. The spherical harmonics may include zero-order, first-order, and higher-order harmonics. In another embodiment, the spatial functions include discrete panning functions. Further in accordance with the method of the invention, a decoding of the directionally encoded audio includes providing a set of filters. The filters are defined based on the selected spatial functions.
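The encoding steps summarized above (one bank of gains applied to the direct signal, a delay, then a second bank of gains applied to the delayed copy) can be sketched as follows. Function and parameter names, and the integer-sample delay, are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def encode_direction(signal, gains_left, gains_right, itd_samples):
    """Sketch of the directional encoder summarized above: one bank of
    amplifiers (gains_left) scales the audio signal directly to produce the
    first encoded signals; the signal is then delayed by an integer number
    of samples (the ITD), and a second bank of amplifiers (gains_right)
    scales the delayed copy to produce the second encoded signals."""
    delayed = np.concatenate([np.zeros(itd_samples), signal])[:len(signal)]
    first = [g * signal for g in gains_left]    # first encoded signals
    second = [g * delayed for g in gains_right]  # second encoded signals
    return first, second
```

In a real-time mixer, the per-channel gains would be updated as the scaling factors of the selected spatial functions change with the source direction.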
An audio recording apparatus includes first and second multiplier circuits having adjustable gains. A source of an audio signal is provided, the audio signal having a time-varying direction associated therewith. The gains are adjusted based on the direction for the audio. A delay element inserts a delay into the audio signal. The audio and delayed audio are processed by the multiplier circuits, thereby creating directionally encoded signals. In one embodiment, an audio recording system comprises a pair of soundfield microphones for recording an audio source. The soundfield microphones are spaced apart at the positions of the ears of a notional listener.
According to the invention, a method for decoding includes deriving a set of spectral functions from preselected spatial functions. The resulting spectral functions are the basis for digital filters which comprise the decoder.
According to the invention, a decoder is provided comprising digital filters. The filters are defined based on the spatial functions selected for the encoding of the audio signal. The filters are arranged to produce output signals suitable for feeding into loudspeakers.
The present invention provides an efficient method for 3-D audio encoding and playback of multiple sound sources, based on the linear decomposition of HRTFs using spatial panning functions and spectral functions, which guarantees accurate reproduction of ITD cues for all sources over the whole frequency range and uses predetermined panning functions.
The use of predetermined panning functions offers the following advantages over methods of the prior art which use principal component analysis or singular value decomposition to determine panning functions and spectral functions:
- efficient implementation in hardware or software
- non-individual encoding/recording format
- adaptation of the decoder to the listener
- improved multi-channel loudspeaker playback
Two particularly advantageous choices for the panning functions are detailed, offering additional benefits:
Spherical harmonics:
- allow recordings to be made using available microphone technology (a pair of Soundfield microphones)
- yield a recording format that is a superset of the B-Format standard
- are associated to a special decoding technique for multi-channel loudspeaker playback
Discrete panning functions:
- guarantee exact reproduction of chosen directions
- increase efficiency of implementation (by minimizing the number of non-zero panning weights for each source)
- are associated to a special decoding technique for multi-channel loudspeaker playback
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1: Discrete panning over 4 loudspeakers. Example of discrete panning functions.
Figure 2: B-format encoding and recording. Playback over 6 loudspeakers using Ambisonic decoding.
Figure 3: Binaural encoding and recording. Playback over 2 speakers using cross-talk cancellation.
Figure 4: (a) Post-filtering topology, (b) Pre-filtering topology. Figure 5: (a) Post-filtering and (b) pre-filtering topologies, with control of interaural time difference for each sound source.
Figure 6: Binaural B Format encoding with decoding for playback over headphones.
Figure 7: Original and reconstructed HRTF with Binaural B Format (first-order reconstruction).
Figure 8: Binaural B Format reconstruction filters (amplitude frequency response).
Figure 9: Binaural B Format decoder for playback over 4 speakers.
Figure 10: Binaural Discrete Panning using 6 encoding channels, with decoder for playback over 2 speakers with cross-talk cancellation.
Figure 11: Binaural Discrete Panning using 6 encoding channels, with decoder for playback over 4 speakers with cross-talk cancellation.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Modeling HRTF using predetermined spatial functions
Given a set of N spatial panning functions {gi(θ, φ), i = 0, 1, ..., N−1}, the procedure for modeling HRTFs according to the present invention is as follows. This procedure is associated to the topologies described in Fig. 5a and Fig. 5b for directionally encoding one or several audio signals and decoding them for playback over headphones.
1. Measuring HRTFs for a set of positions {(θp, φp), p = 1, 2, ..., P}. The sets of left-ear and right-ear HRTFs will be denoted, respectively, as:
{L(θp, φp, f)} and {R(θp, φp, f)}, for p = 1, 2, ..., P, where f denotes frequency.
2. Extracting the left and right delays tL(θp, φp) and tR(θp, φp) for every position. Denoting T(θ, φ, f) = exp(2πj f t(θ, φ)) the time-delay operator of duration t, expressed in the frequency domain, the left-ear and right-ear HRTFs are expressed by:
L(θp, φp, f) = TL(θp, φp, f) L̃(θp, φp, f),
R(θp, φp, f) = TR(θp, φp, f) R̃(θp, φp, f), for p = 1, 2, ..., P, where L̃ and R̃ denote the delay-free HRTFs.
3. Equalization, removing a common transfer function from all HRTFs measured on one ear. This transfer function can include the effect of the measuring apparatus, loudspeaker, and microphones used. It can also be the delay-free HRTF L̃ (or R̃) measured for one particular direction (free-field equalization), or a transfer function representing an average of all the delay-free HRTFs L̃ (or R̃) measured over all positions (diffuse-field equalization).
4. Symmetrization, whereby the HRTFs and the delays are corrected in order to verify the natural left-right symmetry relations:
R(θ, φ, f) = L(2π−θ, φ, f) and tL(θ, φ) = tR(2π−θ, φ).
5. Derivation of the set of reconstruction filters {Li(f)} and {Ri(f)} satisfying the approximate equations:
L̃(θp, φp, f) ≈ Σ(i=0,...,N−1) gi(θp, φp) Li(f),
R̃(θp, φp, f) ≈ Σ(i=0,...,N−1) gi(θp, φp) Ri(f), for p = 1, 2, ..., P, where L̃ and R̃ denote the delay-free HRTFs.
In practice, the measured HRTFs are obtained in the digital domain. Each HRTF is represented as a complex frequency response sampled at a given number of frequencies over a limited frequency range, or, equivalently, as a temporal impulse response sampled at a given sample rate. The HRTF set {L(θp, φp, f)} or {R(θp, φp, f)} is represented, in the above decomposition, as a complex function of frequency in which every sample is a function of the spatial variables θ and φ, and this function is represented as a weighted combination of the spatial functions gi(θ, φ). As a result, a sampled complex function of frequency is associated to each spatial function gi(θ, φ), which defines the sampled frequency response of the corresponding filter Li(f) or Ri(f). It is noted that, due to the linearity of the Fourier transform, an equivalent decomposition would be obtained if the frequency variable f were replaced by the time variable in order to reconstruct the time-domain representation of the HRTF.
The equalization and the symmetrization of the HRTF sets L(θp, φp, f) and R(θp, φp, f) are not necessary for carrying out the invention. However, performing these operations eliminates some of the artifacts associated to the HRTF measurement method. Thus, it may be preferable to perform these operations for their practical advantages. Step 2 is optional and is associated to the binaural synthesis topologies described in Figs. 5a and 5b, where the delays tL(θ, φ) and tR(θ, φ) are introduced in the directional encoding module for each sound source. If step 2 is not applied, the binaural synthesis topologies of Figs. 4a and 4b can be used. If the delay extraction procedure is appropriately performed (as discussed below), the topologies of Figs. 5a and 5b will provide a higher fidelity with fewer encoding channels. It will be noted that adding or subtracting a common delay offset to tL(θ, φ) and tR(θ, φ) in the encoding module will have no effect on the perceived direction of sounds during playback, even if the delay offset varies with direction, as long as the interaural time delay difference (ITD), defined below, is preserved for each direction:
ITD(θ, φ) = tR(θ, φ) − tL(θ, φ).
It is noted that the above procedure differs from the methods of the prior art. Conventional analytical techniques, such as PCA and SVD, simultaneously produce the spectral functions and the spatial functions which minimize the least-squares error between the original HRTFs and the reconstructed HRTFs for a given number of channels N. In the elaboration of the present invention, it is recognized in particular, that these earlier methods suffer from the following drawbacks:
The spatial panning functions cannot be chosen a priori.
The choice of error criterion to be minimized (mean squared error) enables the resolution of the approximation problem via tractable linear algebra. However, the technique does not guarantee that the model of the HRTF thus obtained is optimal in terms of perceived reproduction for a given number of encoding channels.
In comparison, the technique in accordance with the present invention permits a priori selection of the spatial functions, from which the spectral functions are derived. As will be apparent from the following description, several benefits of the present invention will result from the possibility of choosing the panning functions a priori and from using a variety of techniques to derive the associated reconstruction filters.
An immediate advantage of the invention is that the encoding format in which sounds are mixed in Fig. 5a is devoid of listener-specific features. As discussed below, it is possible, without causing major degradations in reproduction fidelity, to use a listener-independent model of the ITD in carrying out the invention. Generally, it is possible to make a selection of spatial panning functions and tune the reconstruction filters to achieve practical advantages such as:
- enabling improved reproduction over multi-channel loudspeaker systems,
- enabling the production of microphone recordings,
- preserving a high fidelity of reproduction in chosen directions or regions of space even with a low number of channels.
Two particular choices of spatial panning functions will be detailed in this description: spherical harmonic functions and discrete panning functions. Practical methods for designing the set of reconstruction filters Li(f) and Ri(f) will be described in more detail. From the discussion which follows, it will be clear to a person of ordinary skill in the relevant art that other spatial functions can be used and that alternative techniques for producing the corresponding reconstruction filters are available.
Delay extraction techniques
The extraction of the interaural time delay difference, ITD(θp, φp), from the HRTF pair L(θp, φp, f) and R(θp, φp, f) is performed as follows.
Any transfer function H(f) can be uniquely decomposed into its all-pass component and its minimum-phase component as follows:
H(f) = exp(j ψ(f)) Hmin(f), where ψ(f), called the excess-phase function of H(f), is defined by
ψ(f) = Arg(H(f)) − Re(Hilbert(−Log|H(f)|)).
Applying this decomposition to the HRTFs L(θp, φp, f) and R(θp, φp, f), we obtain the corresponding excess-phase functions, ψR(θp, φp, f) and ψL(θp, φp, f), and the corresponding minimum-phase HRTFs, Lmin(θp, φp, f) and Rmin(θp, φp, f).
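The minimum-phase/excess-phase decomposition above can be sketched numerically. This is an illustration under stated assumptions, not the patent's exact procedure: the minimum-phase phase is obtained as the negative imaginary part of the analytic signal of log|H|, i.e. a circular Hilbert transform over the DFT grid:

```python
import numpy as np
from scipy.signal import hilbert

def min_phase_excess_phase(h, nfft=None):
    """Decompose an impulse response h into a minimum-phase spectrum
    Hmin(f) and an excess-phase function psi(f), so that
    H(f) = exp(j*psi(f)) * Hmin(f)."""
    n = nfft or len(h)
    H = np.fft.fft(h, n)
    log_mag = np.log(np.maximum(np.abs(H), 1e-12))
    phi_min = -np.imag(hilbert(log_mag))    # minimum-phase phase response
    H_min = np.exp(log_mag + 1j * phi_min)  # minimum-phase spectrum
    psi = np.angle(H) - phi_min             # excess-phase (all-pass) part
    return H_min, psi
```

For a minimum-phase input the excess phase is zero (modulo 2π); adding a pure delay leaves Hmin unchanged and shifts ψ by the corresponding linear phase.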
The interaural time delay difference, ITD(θp, φp), can be defined, for each direction (θp, φp), by a linear approximation of the interaural excess-phase difference:
ψR(θ, φ, f) − ψL(θ, φ, f) ≈ 2πf ITD(θ, φ).
In practice, this approximation may be replaced by various alternative methods of estimating the ITD, including time-domain methods such as methods using the cross-correlation function of the left and right HRTFs or methods using a threshold detection technique to estimate an arrival time at each ear. Another possibility is to use a formula for modeling the variation of ITD vs. direction. For instance,
• the spherical head model with diametrally opposite ears yields
ITD(θ, φ) = r/c [ arcsin(cos(φ) sin(θ)) + cos(φ) sin(θ) ],
• the free-field model (where the ears are represented by two points separated by the distance 2r) yields
ITD(θ, φ) = 2r/c cos(φ) sin(θ), where c denotes the speed of sound. In these two formulas, the value of the radius r can be chosen so that ITD(θp, φp) is as large as possible without exceeding the value derived from the linear approximation of the interaural excess-phase difference. In a digital implementation, the value of ITD(θp, φp) can be rounded to the closest integer number of samples, or the interaural excess-phase difference may be approximated by the combination of a delay unit and a digital all-pass filter.
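The two ITD formulas above can be sketched directly. The head radius and speed-of-sound constants are assumed typical values, not values from the patent:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s (assumed value)
HEAD_RADIUS = 0.0875    # m (assumed typical head radius)

def itd_spherical_head(azimuth, elevation, r=HEAD_RADIUS, c=SPEED_OF_SOUND):
    """Spherical-head model with diametrally opposite ears:
    ITD = r/c * (arcsin(cos(phi) sin(theta)) + cos(phi) sin(theta))."""
    s = math.cos(elevation) * math.sin(azimuth)
    return r / c * (math.asin(s) + s)

def itd_free_field(azimuth, elevation, r=HEAD_RADIUS, c=SPEED_OF_SOUND):
    """Free-field model, ears as two points separated by 2r:
    ITD = 2r/c * cos(phi) sin(theta)."""
    return 2 * r / c * math.cos(elevation) * math.sin(azimuth)
```

Both models vanish in the median plane; for a fully lateral source the spherical-head model gives r/c (π/2 + 1) ≈ 0.66 ms versus 0.51 ms for the free-field model with these constants.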
The delay-free HRTFs, L̃(θp, φp, f) and R̃(θp, φp, f), from which the reconstruction filters Li(f) and Ri(f) will be derived, can be identical, respectively, to the minimum-phase HRTFs Lmin(θp, φp, f) and Rmin(θp, φp, f).
Whatever the method used to extract or model the interaural time delay difference from the measured HRTFs, it can be regarded as an approximation of the interaural excess-phase difference ψR(θ, φ, f) − ψL(θ, φ, f) by a model function ψ(θ, φ, f):
ψR(θ, φ, f) − ψL(θ, φ, f) ≈ ψ(θ, φ, f).
It may be advantageous, in order to improve the fidelity of the 3-D audio reproduction according to the present invention, to correct for the error made in this phase difference approximation, by incorporating the residual excess-phase difference into the delay-free HRTFs L̃(θp, φp, f) and R̃(θp, φp, f) as follows:
L̃(f) = Lmin(f) exp(j φL(f)) and R̃(f) = Rmin(f) exp(j φR(f)),
where φL(f) and φR(f) satisfy φR(f) − φL(f) = ψR(f) − ψL(f) − ψ(θ, φ, f), and either φL(f) = 0 or φR(f) = 0, as appropriate to ensure that the delay-free HRTFs L̃(θp, φp, f) and R̃(θp, φp, f) are causal transfer functions.
Application of spherical harmonic functions for encoding and recording
General definition of spherical harmonics.
Of particular interest in the following description are the zero-order harmonic W and the first-order harmonics X, Y and Z defined earlier, as well as the second-order harmonics, U and V, and the third-order harmonics, S and T, defined below.
U(θ, φ) = cos²(φ) cos(2θ)
V(θ, φ) = cos²(φ) sin(2θ)
S(θ, φ) = cos³(φ) cos(3θ)
T(θ, φ) = cos³(φ) sin(3θ)
Advantages of spherical harmonics include:
- mathematically tractable, closed form (allowing interpolation between directions)
- mutually orthogonal
- spatial interpretation (e. g. front-back difference), which facilitates recording
Fig. 6 illustrates this method in the case where the minimum-phase HRTFs are decomposed over spherical harmonics limited to zero and first order. The directional encoding of the input signal produces an 8-channel encoded signal herein referred to as a "Binaural B Format" encoded signal. The mixer provides for mixing of additional source signals, including synthesized sources. Conversely, 8 filters are used to decode this format into a binaural output signal. The method can be extended to include any or all of the above higher-order spherical harmonics. Using the higher orders provides for more accurate reconstruction of HRTFs, especially at high frequencies (above 3 kHz).
As discussed above, a Soundfield microphone produces B-Format encoded signals. As such, a Soundfield microphone can be characterized by a set of spherical harmonic functions. Thus, from Fig. 6, it can be seen that encoding a sound in accordance with the invention to produce Binaural B Format encoded signals simulates a free-field recording using two Soundfield microphones located at the notional positions of the two ears. This simulation is exact if the directional encoder provides ITD according to the following free-field model:
ITD(θ, φ) = tR(θ, φ) − tL(θ, φ) = d/c cos(φ) sin(θ), where d is the distance between the microphones. If the ITD model provided in the encoder takes into account the diffraction of sound around the head or a sphere, the encoded signal and the recorded signal will differ in the value of the ITD for sounds away from the median plane. This difference can be reduced, in practice, by adjusting the distance between the two microphones to be slightly larger than the distance between the two ears of the listener.
The Binaural B Format recording technique is compatible with currently existing 8-channel digital recording technology. The recording can be decoded for reproduction over headphones through the bank of 8 filters Li(f) and Ri(f) shown in Fig. 6, or decoded over two or more loudspeakers using methods to be described below. Before decoding, additional sources can be encoded in Binaural B Format and mixed into the recording.
The Binaural B Format offers the additional advantage that the set of four left or right channels can be used with conventional Ambisonic decoders for loudspeaker playback. Other advantages of using spherical harmonics as the spatial panning functions in carrying out the invention will be apparent in connection to multi-channel loudspeaker playback, offering an improved fidelity of 3-D audio reproduction compared to Ambisonic techniques.
Derivation of the reconstruction filters
For clarity, the derivation of the N reconstruction filters Li(f) will be illustrated in the case where the spatial panning functions gi(θp, φp) are spherical harmonics. However, the methods described are general and apply regardless of the choice of spatial functions.
The problem is to find, for a given frequency (or time), a set of complex scalars Li(f) so that the linear combination of the spatial functions gi(θp, φp) weighted by the Li(f) approximates the spatial variation of the HRTF L(θp, φp, f) at that frequency (or time). This problem can be conveniently represented by the matrix equation
L = G Λ, where
• the set of HRTFs L(θp, φp, f) defines the P×1 vector L, P being the number of spatial directions,
• each spatial panning function gi(θp, φp) defines the P×1 vector Gi, and the matrix G is the P×N matrix whose columns are the vectors Gi,
• the set of reconstruction filters Li(f) defines the N×1 vector of unknowns Λ.
The solution which minimizes the energy of the error is given by the pseudo-inversion
Λ = (Gᵀ G)⁻¹ Gᵀ L, where (Gᵀ G), known as the Gram matrix, is the N×N matrix formed by the dot products G(i, k) = Giᵀ Gk of the spatial vectors. The Gram matrix is diagonal if the spatial vectors are mutually orthogonal.
In the simplest case, the sampled spatial functions are mutually orthogonal, and the filters are derived by orthogonal projection of the HRTFs on the individual spatial functions (a dot product computed at each frequency); an example is 2-D reproduction with regular azimuth sampling. If the sampled functions are not mutually orthogonal, multiplying by the inverse of the Gram matrix ensures correct reconstruction.
Even when the panning functions gi(θ, φ) are mutually orthogonal, as is the case with spherical harmonics, the vectors Gi obtained by sampling these functions may not be orthogonal. This happens typically if the spatial sampling is not uniform (as is often the case with 3-D HRTF measurements). This problem can be remedied by redefining the spatial dot product so as to approximate the continuous integral of the product of two spatial functions
⟨gi, gk⟩ = 1/(4π) ∫∫ gi(θ, φ) gk(θ, φ) cos(φ) dθ dφ by
⟨gi, gk⟩ ≈ Σ(p=1,...,P) gi(θp, φp) gk(θp, φp) dS(p) = Giᵀ Δ Gk, where Δ is a diagonal P×P matrix with Δ(p, p) = dS(p), and dS(p) is proportional to a notional solid angle covered by the HRTF measured for the direction (θp, φp). This definition yields the generalized pseudo-inversion equation
Λ = (Gᵀ Δ G)⁻¹ Gᵀ Δ L, where Λ is the vector of reconstruction filters, the diagonal matrix Δ can be used as a spatial weighting function in order to achieve a more accurate 3-D audio reproduction in certain regions of space compared to others, and the modified Gram matrix (Gᵀ Δ G) ensures that the solution minimizes the (weighted) mean squared error.
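The weighted pseudo-inversion can be sketched with NumPy. Function and argument names, and the synthetic test setup, are illustrative assumptions, not from the patent:

```python
import numpy as np

def reconstruction_filters(hrtf, directions, panning_funcs, weights=None):
    """Derive reconstruction filters by weighted least squares, one
    frequency bin at a time, following the generalized pseudo-inversion
    (G^T D G)^-1 G^T D L described above.  hrtf is a P x F complex array
    (P directions, F frequency bins); panning_funcs is a list of N
    functions g_i(azimuth, elevation); weights is an optional length-P
    vector of solid-angle weights dS(p)."""
    G = np.array([[g(az, el) for g in panning_funcs]
                  for az, el in directions])      # P x N panning matrix
    if weights is None:
        weights = np.ones(G.shape[0])
    D = np.diag(weights)                          # spatial weighting matrix
    gram = G.T @ D @ G                            # weighted Gram matrix
    return np.linalg.solve(gram, G.T @ D @ hrtf)  # N x F filter responses
```

When an HRTF set is exactly a weighted combination of the panning functions, the filters are recovered exactly; otherwise the result is the least-squares fit described in the text.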
An additional possibility is to project on a subset of the chosen set of spatial functions using the above methods, and then project the residual error over other spatial functions (cf. aes16). For example, to optimize the fidelity of reconstruction in the horizontal plane, project on W, X, Y first, and then project the error on Z. This process can be iterated in more than 2 steps.
By combining the above techniques, it is possible, for a given set of spatial panning functions, to achieve control over chosen perceptual aspects of the 3-D audio reproduction, such as the front/back or up/down discrimination or the accuracy in particular regions of space.
Fig. 7 illustrates the performance of the method for reconstructing the HRTF magnitude spectra in the horizontal plane (φ = 0). For this reconstruction, only 3 channels per ear are necessary, since the Z channel is not used. The original data are diffuse-field equalized HRTFs derived from measurements on a dummy head. Due to the limitation to first-order harmonics, the reconstruction matches the original magnitude spectra reasonably well up to about 2 or 3 kHz, but the performance tends to degrade with increasing frequency. For large-scale applications, a gentle degradation at high frequencies can be acceptable, since inter-individual differences in HRTFs typically become prominent at frequencies above 5 kHz. The frequency responses of the reconstruction filters obtained in this case are shown on Fig. 8.
Adaptation of the reconstruction filters to the listener
An advantage of a recording made in accordance with the invention over a conventional two-channel dummy-head recording is that, unlike prior-art encoded signals, Binaural B Format encoded signals do not contain spectral HRTF features. These features are only introduced at the decoding stage by the reconstruction filters Li(f). Contrary to a conventional binaural recording, a Binaural B Format recording allows listener-specific adaptation at the reproduction stage, in order to reduce the occurrence of artifacts such as front-back reversals and in-head or elevated localization of frontal sound events.
Listener-specific adaptation can be achieved even more effectively in the context of a real-time digital mixing system. Moreover, the technique of the present invention readily lends itself to a real-time mixing approach and can be conveniently implemented, as it only involves the correction of the head radius r for the synthesis of ITD cues and the adaptation of the four reconstruction filters Li(f). If diffuse-field equalization is applied to the headphones and to the measured HRTFs, and therefore to the reconstruction filters Li(f), the adaptation only needs to address direction-dependent features related to the morphology of the listener, rather than variations in HRTF measurement apparatus and conditions.
Application of discrete panning functions
Discrete panning functions are functions which minimize the number of non-zero panning weights for any direction: 2 weights in 2-D and 3 weights in 3-D. For each panning function, there is a direction where this panning function reaches unity and is the only non-zero panning function. An example is given in Fig. 1 for the 2-D case; many variations are possible.
An advantage of discrete panning functions is that fewer operations are needed in the encoding module (multiplying by a panning weight and adding into the mix is only necessary for the encoding channels which have non-zero weights).
The projection techniques described above can be used to derive the reconstruction filters. Alternatively, it can be noted that each discrete panning function covers a particular region of space, and admits a "principal direction" (the direction for which the panning weight reaches 1). Therefore, a suitable reconstruction filter can be the HRTF corresponding to that principal direction. This will guarantee exact reconstruction of the HRTF for that particular direction. Alternatively, a combination of the principal direction and the nearest directions can be used to derive the reconstruction filter. When it is desired to design a 3D audio display system which offers maximum fidelity for certain directions of the sound, it is straightforward to design a set of panning functions which will admit these specific directions as principal directions.
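A minimal sketch of 2-D discrete panning weights, assuming sorted principal azimuths covering the circle and a simple linear crossfade between adjacent principal directions (the patent allows many variations of the panning law):

```python
import math

def discrete_panning_weights(azimuth, principal_azimuths):
    """2-D discrete panning: at most two non-zero weights, obtained by
    crossfading between the two adjacent principal directions.  Each
    weight reaches 1 exactly when the source azimuth coincides with that
    channel's principal direction.  principal_azimuths must be sorted,
    distinct, in radians in [0, 2*pi)."""
    two_pi = 2 * math.pi
    n = len(principal_azimuths)
    az = azimuth % two_pi
    weights = [0.0] * n
    for i in range(n):
        lo = principal_azimuths[i]
        hi = principal_azimuths[(i + 1) % n]
        span = (hi - lo) % two_pi    # angular width of this sector
        delta = (az - lo) % two_pi   # source offset within the sector
        if delta <= span:
            frac = delta / span
            weights[i] = 1.0 - frac
            weights[(i + 1) % n] = frac
            break
    return weights
```

Because at most two weights are non-zero, an encoder only needs to multiply and accumulate into two of the N channels per source.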
Methods for playback over loudspeakers
When used in the topologies of Figs. 5a and 5b, the set of reconstruction filters obtained according to the present invention will provide a two-channel output signal suitable for high-fidelity 3D audio playback over headphones. As illustrated in Fig. 3, this two channel signal can be further processed through a cross-talk cancellation network in order to provide a two-channel signal suitable for playback over two loudspeakers placed in front of the listener. This technique can produce convincing lateral sound images over a frontal pair of loudspeakers, covering azimuths up to about ±120°. However, lateral sound images tend to collapse into the loudspeakers in response to rotations and translations of the listener's head. The technique is also less effective for sound events assigned to rear or elevated positions, even when the listener sits at the "sweet spot".
Fig. 9 illustrates how, in the case of spherical harmonic panning functions, the reconstruction filters Li(f) can be utilized to provide improved reproduction over multi-channel loudspeaker playback systems. An advantage of the Binaural B Format is that it contains information for discriminating rear sounds from frontal sounds. This property can be exploited in order to overcome the limitations of 2-channel transaural reproduction, by decoding over a 4-channel loudspeaker setup. The 4-channel decoding network, shown in Fig. 9, makes use of the sum and difference of the W and X signals.
The binaural signal is decomposed as follows:
L(θ, φ, f) = LF(θ, φ, f) + LB(θ, φ, f), where LF and LB are the "front" and "back" binaural signals, defined by:
LF(θ, φ, f) = 0.5 {[W(θ, φ) + X(θ, φ)] [LW(f) + LX(f)] + Y(θ, φ) LY(f) + Z(θ, φ) LZ(f)}
LB(θ, φ, f) = 0.5 {[W(θ, φ) − X(θ, φ)] [LW(f) − LX(f)] + Y(θ, φ) LY(f) + Z(θ, φ) LZ(f)}
where LW(f), LX(f), LY(f) and LZ(f) denote the reconstruction filters associated with the W, X, Y and Z channels. It can be verified that LB = 0 for (θ, φ) = (0, 0) and that LF = 0 for (θ, φ) = (π, 0). The network of Fig. 9 is designed to eliminate front-back confusions, by reproducing frontal sounds over the front loudspeakers and rear sounds over the rear loudspeakers, while elevated or lateral sounds are reproduced via both pairs of loudspeakers. This significantly improves the reproduction of lateral, rear or elevated sound images compared to a 2-channel loudspeaker setup (or to 4-channel loudspeaker reproduction using conventional pairwise amplitude panning or Ambisonic techniques). The listener is also allowed to move more freely than with 2-channel loudspeaker reproduction. By exploiting the Z component, a similar approach can be used to decode the binaural B format over a 3-D loudspeaker setup (comprising loudspeakers above or below the horizontal plane).
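The front/back decomposition can be checked numerically at a single frequency. The filter values below are hypothetical complex numbers, chosen only to verify that LF + LB reconstructs the full binaural response and that LB vanishes for a frontal source:

```python
import math

def front_back_split(az, el, Lw, Lx, Ly, Lz):
    """Front/back decomposition of the left binaural signal at one
    frequency.  Lw, Lx, Ly, Lz are hypothetical reconstruction-filter
    values (complex scalars) for the W, X, Y, Z channels."""
    W = 1.0
    X = math.cos(el) * math.cos(az)
    Y = math.cos(el) * math.sin(az)
    Z = math.sin(el)
    LF = 0.5 * ((W + X) * (Lw + Lx) + Y * Ly + Z * Lz)  # "front" signal
    LB = 0.5 * ((W - X) * (Lw - Lx) + Y * Ly + Z * Lz)  # "back" signal
    full = W * Lw + X * Lx + Y * Ly + Z * Lz            # full binaural response
    return LF, LB, full
```

Expanding the products shows LF + LB = W LW + X LX + Y LY + Z LZ for every direction, which is the identity the decoder relies on.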
Fig. 10 illustrates how the present invention, applied with discrete panning functions, can be advantageously used to provide three-dimensional audio playback over two loudspeakers placed in front of the listener, with cross-talk cancellation. In this implementation of the invention, the discrete panning functions g1(θ, φ) and g2(θ, φ) are chosen so that their principal directions coincide, respectively, with the directions of the left and right loudspeakers from the listener's head (the principal direction of the discrete panning function gi(θ, φ) is defined as the direction (θi, φi) verifying gi(θi, φi) = 1.0 and gj(θi, φi) = 0 for j ≠ i). Furthermore, the reconstruction filters and the cross-talk cancellation networks are free-field equalized, for each ear, with respect to the direction of the closest loudspeaker. As a result of these conditions, it can be verified that, if an audio signal is panned to the direction of one of the two loudspeakers, it is fed with no modification to that loudspeaker and cancelled out from the output feeding the other loudspeaker. Therefore, the resulting loudspeaker playback system combines, in conjunction with the previously described advantages of the present invention, the advantage of conventional discrete panning systems and the advantages of binaural reproduction techniques using cross-talk cancellation.
The following notations are used in Fig. 10 and Fig. 11:
• Li/j denotes the ratio of two delay-free HRTFs:
Li/j = L(θi, φi, f) / L(θj, φj, f);
• L̃i/j denotes the ratio of two delay-free HRTFs combined with the time difference between them:
L̃i/j = exp(2πjf [t(θi, φi) - t(θj, φj)]) L(θi, φi, f) / L(θj, φj, f).
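A small sketch of the second notation, under the assumption that the exponential factor represents a pure delay of t(θi, φi) - t(θj, φj) applied on top of the delay-free HRTF ratio (names are illustrative):

```python
import numpy as np

def hrtf_ratio(L_i, L_j, t_i, t_j, freqs):
    """Ratio of two delay-free HRTFs recombined with their time difference.
    L_i, L_j: complex HRTF spectra sampled at freqs (Hz); t_i, t_j: onset delays (s)."""
    return np.exp(2j * np.pi * freqs * (t_i - t_j)) * (L_i / L_j)
```

When the two HRTF magnitudes are equal, the ratio is an all-pass term whose phase is exactly the interaural time difference converted to phase at each frequency.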
Fig. 11 illustrates how the decoder of Fig. 10 can be modified to offer further improved three-dimensional audio reproduction over four loudspeakers arranged in a front pair and a rear pair. The method used is similar to the method used in the system of Fig. 9, in that a front cross-talk canceller and a rear cross-talk canceller are used, and they receive different combinations of the left and right encoded signals. These combinations are designed so that frontal sounds are reproduced over the front loudspeakers and rear sounds are reproduced over the rear loudspeakers, while elevated or lateral sounds are reproduced via both pairs of loudspeakers. Fig. 11 shows an embodiment of the present invention using six encoding channels for each ear, where channels 1 and 2 are front left and right channels, channels 5 and 4 are rear left and right channels, and channels 3 and 6 are lateral and/or elevated channels. A particularly advantageous property of this embodiment is that, if an audio signal is panned towards the direction of one of the four loudspeakers (corresponding to the principal direction of one of the channels 1, 2, 4, or 5), it is fed with no modification to that loudspeaker and cancelled out from the outputs feeding the other three loudspeakers. It is noted that, generally, the systems of Fig. 10 or Fig. 11 can be extended to include larger numbers of encoding channels without departing from the principles characterizing the present invention, and that, among these encoding channels, one or more can have their principal direction outside of the horizontal plane so as to provide the reproduction of elevated sounds or of sounds located below the horizontal plane.

Claims

What is claimed is:
1. A method for positioning of an audio signal comprising steps of: selecting a set of spatial functions, each having an associated scaling factor; providing a first set of amplifiers and a second set of amplifiers, the gains of the amplifiers being a function of the scaling factors; receiving a first audio signal; providing a direction representing the direction of the source of the first audio signal; adjusting the scaling factors depending on the direction; applying the first set of amplifiers to the first audio signal to produce first encoded signals; delaying the first audio signal to produce a delayed audio signal; and applying the second set of amplifiers to the delayed audio signal to produce second encoded signals.
2. The method of claim 1 wherein the spatial functions are spherical harmonic functions.
3. The method of claim 2 wherein the spherical harmonic functions include at least the first-order harmonics.
4. The method of claim 1 wherein the spatial functions are discrete panning functions.
5. The method of claim 1 wherein for each of the first and second sets of amplifiers, the gain of each amplifier is based on the B-format encoding scheme.
6. The method of claim 1 further including: providing a third set of amplifiers and a fourth set of amplifiers, the gains of the amplifiers being a function of the scaling factors; receiving a second audio signal; providing a direction representing the direction of the source of the second audio signal; adjusting the scaling factors depending on the direction; applying the third set of amplifiers to the second audio signal to produce third encoded signals; delaying the second audio signal to produce a second delayed audio signal; applying the fourth set of amplifiers to the second delayed audio signal to produce fourth encoded signals; mixing the first and the third encoded signals, or the first and the fourth encoded signals; and mixing the second and the fourth encoded signals, or the second and the third encoded signals.
7. The method of claim 6 wherein the second signal is a synthesized audio signal.
8. The method of claim 1 further including decoding the encoded signals with a decoder, the decoder comprising filters defined based on the spatial functions.
9. An audio recording apparatus for directionally encoding an audio signal comprising: a source of an audio signal, the audio signal having a time-varying direction associated therewith; a first set of multiplier circuits, each having a gain factor adaptable according to a direction for the audio signal, each having an input to receive the audio source, each having an output; a delay element having an input coupled to the audio source and having an output; and a second set of multiplier circuits, each having a gain factor adaptable according to a direction for the audio signal, each having an input to receive the output of the delay element, each having an output; whereby the outputs of the first and second multiplier circuits comprise encoded audio signals.
10. The apparatus of claim 9 wherein the source includes a source of a synthesized audio signal.
11. The apparatus of claim 9 wherein the gain factors of the first and second multiplier circuits are based on spherical harmonic functions.
12. The apparatus of claim 11 wherein the spherical harmonic functions include at least zero- and first-order harmonics.
13. The apparatus of claim 9 wherein the gain factors of the first and second multiplier circuits are based on discrete panning functions.
14. The apparatus of claim 9 further including a data storage device having an interface effective for receiving and storing the outputs of the multiplier circuits.
15. A 3-dimensional audio recording system comprising: a first soundfield microphone to produce first directionally encoded audio signals; and a second soundfield microphone to produce second directionally encoded audio signals; wherein the first and second soundfield microphones are proximate each other at the positions of the ears of a notional listener; and wherein the first and second directionally encoded audio signals represent a 3-dimensional audio recording.
16. The system of claim 15 further including a storage device for storing the first and second directionally encoded audio signals.
17. The system of claim 16 further including A/D circuitry for converting outputs of the microphones to digital signals, whereby the digital signals can be stored on the storage device.
18. The system of claim 15 wherein the first and second microphones are spaced apart by a distance substantially equal to the width of a human head.
19. The system of claim 15 wherein the first and second soundfield microphones are characterized by a set of spatial functions, the system further including a decoder for receiving the first and second directionally encoded signals to produce an audio signal, the decoder comprising filters defined based on the spatial functions.
20. A method of producing an audio signal from directionally encoded audio signals comprising steps of: receiving directionally encoded audio signals according to a set of spatial functions; generating a set of spectral functions based on the spatial functions; providing a first set of decoding filters defined by left spectral functions; providing a second set of decoding filters defined by right spectral functions; applying the first decoding filters to the encoded audio signals to produce a left-channel audio signal; and applying the second decoding filters to the encoded audio signals to produce a right-channel audio signal.
21. The method of claim 20 wherein the set of spatial functions is defined by {gi(θ, φ), i = 0, 1, ... N-1} and the step of generating the spectral functions includes providing Li(f) and Ri(f) such that Σ{i=0, ... N-1} gi(θp, φp) Li(f) approximates L(θp, φp, f)
and Σ{i=0, ... N-1} gi(θp, φp) Ri(f) approximates R(θp, φp, f), where L(θp, φp, f) is a set of left-ear HRTFs and R(θp, φp, f) is a set of right-ear HRTFs, where {(θp, φp), p = 1, 2, ... P} is a set of directions and f is frequency.
22. The method of claim 21 wherein L(θp, φp, f) and R(θp, φp, f) are delay-free HRTFs.
23. The method of claim 21 wherein providing Li(f) includes solving, at each frequency f, the vector equation L ≈ G L̂, where: the set of left-ear HRTFs L(θp, φp, f) defines a P×1 vector L; G is a P×N matrix whose columns are P×1 vectors Gi, i = 0, 1, ... N-1; each of the N spatial functions gi(θp, φp) defines the vector Gi; and the set of Li(f) defines the N×1 vector L̂.
24. The method of claim 23 wherein providing Li(f) is obtained by L̂ = (GᵀG)⁻¹GᵀL.
25. The method of claim 24 wherein providing Li(f) includes projecting a P×1 vector L formed by the set of left-ear HRTFs L(θp, φp, f) over each of the P×1 vectors Gi formed by the spatial functions gi(θp, φp) to compute the scalar products Li.
26. The method according to claim 25 wherein an N×1 vector formed by the scalar products Li is multiplied by the inverse of the Gram matrix GᵀG.
27. The method of claim 23 wherein providing Li(f) is obtained by L̂ = (GᵀΔG)⁻¹GᵀΔL, where Δ is a diagonal P×P matrix whose P diagonal elements are weights applied to the individual directions (θp, φp), p = 1, 2, ... P.
28. The method of claim 27 wherein each weight is proportional to a solid angle associated with the corresponding direction.
29. The method of claim 28 wherein the spatial functions are spherical harmonic functions.
30. The method of claim 29 wherein the spherical harmonic functions include at least zero- and first-order harmonics.
31. The method of claim 20 wherein the spectral functions define filters LW(f), LX(f), LY(f), and LZ(f), effective for decoding B-format encoded signals WL, XL, YL, ZL, WR, XR, YR, and ZR, wherein the left-channel audio signal is defined by WL LW(f) + XL LX(f) + YL LY(f) + ZL LZ(f) and the right-channel audio signal is defined by WR LW(f) + XR LX(f) - YR LY(f) + ZR LZ(f); whereby the left- and right-channel audio signals are suitable for playback with headphones.
32. The method of claim 20 wherein the spectral functions define filters LW(f), LX(f), LY(f), and LZ(f) effective for decoding B-format encoded signals WL, XL, YL, ZL, WR, XR, YR, and ZR; wherein the left-channel audio signal comprises two signals, a first signal LF = 0.5 {[WL+XL] [LW(f)+LX(f)] + YL LY(f) + ZL LZ(f)} and a second signal LB = 0.5 {[WL-XL] [LW(f)-LX(f)] + YL LY(f) + ZL LZ(f)}; and wherein the right-channel audio signal comprises two signals, a first signal RF = 0.5 {[WR+XR] [LW(f)+LX(f)] - YR LY(f) + ZR LZ(f)} and a second signal RB = 0.5 {[WR-XR] [LW(f)-LX(f)] - YR LY(f) + ZR LZ(f)}; whereby the left- and right-channel audio signals are suitable for playback over a pair of front speakers and a pair of rear speakers.
33. The method of claim 32 further including: performing a first cross-talk cancellation on the LF and RF signals to feed the front speakers; and performing a second cross-talk cancellation on the LB and RB signals to feed the rear speakers.
34. The method of claim 20 wherein the spatial functions are discrete panning functions having a direction, called a principal direction, where the spatial function is maximum and wherein all other spatial functions are zero.
35. The method of claim 34 wherein the spectral function associated with each spatial function is the delay-free HRTF for the corresponding principal direction.
36. The method according to claims 34 or 35 wherein one or more of the spatial functions have their principal direction corresponding to the direction of one of the loudspeakers.
37. The method according to claims 33 or 36 including performing cross-talk cancellation of the left and right audio signals before feeding the loudspeakers.
38. The method of claims 34 or 35 further including: producing left-front and left-back signals based on the left-channel audio signal; producing right-front and right-back signals based on the right-channel audio signal; and combining the left-front, left-back, right-front, and right-back signals to produce outputs suitable for playback with a pair of front speakers and a pair of rear speakers.
39. The method of claim 38 further including: performing a first cross-talk cancellation on the left-front and right-front signals to feed the front speakers; and performing a second cross-talk cancellation on the left-back and right-back signals to feed the rear speakers.
40. The method of claim 39 wherein one or more of the spatial functions have their principal direction corresponding to the direction of the loudspeakers.
41. A method for reproducing an audio scene comprising: selecting a set of spatial functions; producing directionally encoded audio signals including receiving a first audio source and applying the spatial functions to the first audio source to produce first encoded signals; and decoding the encoded audio signals, including generating spectral functions based on the spatial functions and applying the spectral functions to the encoded audio signals.
42. The method of claim 41 further including delaying the first audio source to produce a delayed source, applying the spatial functions to the delayed source to produce second encoded signals, the first and second signals comprising directionally encoded audio signals.
43. The method of claim 41 wherein the step of producing directionally encoded audio signals further includes receiving a second audio source, applying the spatial functions to the second audio source to produce second encoded signals, and mixing the first and second encoded signals.
44. The method of claim 43 wherein the second audio source is a synthesized audio signal.
45. The method of claim 41 wherein the spatial functions are spherical harmonic functions.
46. The method of claim 45 wherein the spherical harmonic functions include at least zero- and first-order harmonics.
47. The method of claim 41 wherein the spatial functions are discrete panning functions.
48. The method of claim 41 wherein the step of applying the spectral functions to the directionally encoded audio signals includes providing a set of filters defined by the spectral functions and feeding the encoded audio signals into the filters to produce reconstructed audio signals.
49. The method of claim 41 further including performing a cross-talk cancellation operation on the reconstructed audio signals to produce output suitable for playback with speakers.
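The filter-design procedure of claims 20 through 27 amounts to a (possibly weighted) least-squares fit of the spectral functions Li(f) at each frequency. A minimal numpy sketch, with illustrative function and variable names:

```python
import numpy as np

def fit_spectral_functions(g, hrtf, weights=None):
    """Least-squares fit of per-channel reconstruction filters at one frequency.
    g:       P x N matrix of spatial-function gains g_i(theta_p, phi_p)
    hrtf:    P-vector of (complex) HRTF values L(theta_p, phi_p, f)
    weights: optional P-vector of per-direction weights (the diagonal of the matrix
             called Delta in the text)
    Returns the N-vector of filter values L_i(f)."""
    G = np.asarray(g)
    L = np.asarray(hrtf)
    if weights is None:
        # normal equations: L_hat = (G^T G)^-1 G^T L
        return np.linalg.solve(G.conj().T @ G, G.conj().T @ L)
    # weighted normal equations: L_hat = (G^T D G)^-1 G^T D L
    D = np.diag(np.asarray(weights))
    return np.linalg.solve(G.conj().T @ D @ G, G.conj().T @ D @ L)
```

In practice the same fit can be computed with np.linalg.lstsq for better numerical conditioning; the explicit normal-equations form above mirrors the expressions in the claims.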
PCT/US1999/022259 1998-09-25 1999-09-24 Method and apparatus for three-dimensional audio display WO2000019415A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU64006/99A AU6400699A (en) 1998-09-25 1999-09-24 Method and apparatus for three-dimensional audio display
US09/806,193 US7231054B1 (en) 1999-09-24 1999-09-24 Method and apparatus for three-dimensional audio display

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10188498P 1998-09-25 1998-09-25
US60/101,884 1998-09-25

Publications (2)

Publication Number Publication Date
WO2000019415A2 true WO2000019415A2 (en) 2000-04-06
WO2000019415A3 WO2000019415A3 (en) 2001-03-08

Family

ID=22286962

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/022259 WO2000019415A2 (en) 1998-09-25 1999-09-24 Method and apparatus for three-dimensional audio display

Country Status (2)

Country Link
AU (1) AU6400699A (en)
WO (1) WO2000019415A2 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9100748B2 (en) 2007-05-04 2015-08-04 Bose Corporation System and method for directionally radiating sound

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995031881A1 (en) * 1994-05-11 1995-11-23 Aureal Semiconductor Inc. Three-dimensional virtual audio display employing reduced complexity imaging filters
US5638343A (en) * 1995-07-13 1997-06-10 Sony Corporation Method and apparatus for re-recording multi-track sound recordings for dual-channel playbacK
US5682461A (en) * 1992-03-24 1997-10-28 Institut Fuer Rundfunktechnik Gmbh Method of transmitting or storing digitalized, multi-channel audio signals


Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7606373B2 (en) 1997-09-24 2009-10-20 Moorer James A Multi-channel surround sound mastering and reproduction techniques that preserve spatial harmonics in three dimensions
US6904152B1 (en) 1997-09-24 2005-06-07 Sonic Solutions Multi-channel surround sound mastering and reproduction techniques that preserve spatial harmonics in three dimensions
WO2001062042A1 (en) * 2000-02-17 2001-08-23 Lake Technology Limited Virtual audio environment
WO2001082651A1 (en) * 2000-04-19 2001-11-01 Sonic Solutions Multi-channel surround sound mastering and reproduction techniques that preserve spatial harmonics in three dimensions
JP2003531555A (en) * 2000-04-19 2003-10-21 ソニック ソリューションズ Multi-channel surround sound mastering and playback method for preserving 3D spatial harmonics
GB2379147A (en) * 2001-04-18 2003-02-26 Univ York Sound processing
GB2379147B (en) * 2001-04-18 2003-10-22 Univ York Sound processing
US8238578B2 (en) 2002-12-03 2012-08-07 Bose Corporation Electroacoustical transducing with low frequency augmenting devices
US8139797B2 (en) 2002-12-03 2012-03-20 Bose Corporation Directional electroacoustical transducing
US7676047B2 (en) 2002-12-03 2010-03-09 Bose Corporation Electroacoustical transducing with low frequency augmenting devices
WO2005096268A3 (en) * 2004-03-01 2006-06-08 France Telecom Method for processing audio data, in particular in an ambiophonic context
WO2005096268A2 (en) * 2004-03-01 2005-10-13 France Telecom Method for processing audio data, in particular in an ambiophonic context
FR2866974A1 (en) * 2004-03-01 2005-09-02 France Telecom Audio data processing method for e.g. documentary recording, involves encoding sound signals, and applying spatial component amplitude attenuation in frequency range defined by component order and distance between source and reference point
WO2007101958A3 (en) * 2006-03-09 2007-11-01 France Telecom Optimization of binaural sound spatialization based on multichannel encoding
US9215544B2 (en) 2006-03-09 2015-12-15 Orange Optimization of binaural sound spatialization based on multichannel encoding
WO2007101958A2 (en) * 2006-03-09 2007-09-13 France Telecom Optimization of binaural sound spatialization based on multichannel encoding
WO2008039339A3 (en) * 2006-09-25 2008-05-29 Dolby Lab Licensing Corp Improved spatial resolution of the sound field for multi-channel audio playback systems by deriving signals with high order angular terms
US8103006B2 (en) 2006-09-25 2012-01-24 Dolby Laboratories Licensing Corporation Spatial resolution of the sound field for multi-channel audio playback systems by deriving signals with high order angular terms
WO2008039339A2 (en) 2006-09-25 2008-04-03 Dolby Laboratories Licensing Corporation Improved spatial resolution of the sound field for multi-channel audio playback systems by deriving signals with high order angular terms
US8483413B2 (en) 2007-05-04 2013-07-09 Bose Corporation System and method for directionally radiating sound
US9560448B2 (en) 2007-05-04 2017-01-31 Bose Corporation System and method for directionally radiating sound
US8855320B2 (en) 2008-08-13 2014-10-07 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus for determining a spatial output multi-channel audio signal
US8824689B2 (en) 2008-08-13 2014-09-02 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus for determining a spatial output multi-channel audio signal
US8879742B2 (en) 2008-08-13 2014-11-04 Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V. Apparatus for determining a spatial output multi-channel audio signal
EP2154911A1 (en) * 2008-08-13 2010-02-17 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. An apparatus for determining a spatial output multi-channel audio signal
EP2285139A3 (en) * 2009-06-25 2016-10-12 Harpex Ltd. Device and method for converting spatial audio signal
EP2285139A2 (en) 2009-06-25 2011-02-16 Berges Allmenndigitale Rädgivningstjeneste Device and method for converting spatial audio signal
US8705750B2 (en) 2009-06-25 2014-04-22 Berges Allmenndigitale Rådgivningstjeneste Device and method for converting spatial audio signal
EP2268064A1 (en) * 2009-06-25 2010-12-29 Berges Allmenndigitale Rädgivningstjeneste Device and method for converting spatial audio signal
US10531215B2 (en) 2010-07-07 2020-01-07 Samsung Electronics Co., Ltd. 3D sound reproducing method and apparatus
RU2694778C2 (en) * 2010-07-07 2019-07-16 Самсунг Электроникс Ко., Лтд. Method and device for reproducing three-dimensional sound
EP2738962A1 (en) * 2012-11-29 2014-06-04 Thomson Licensing Method and apparatus for determining dominant sound source directions in a higher order ambisonics representation of a sound field
US9445199B2 (en) 2012-11-29 2016-09-13 Dolby Laboratories Licensing Corporation Method and apparatus for determining dominant sound source directions in a higher order Ambisonics representation of a sound field
WO2014082883A1 (en) * 2012-11-29 2014-06-05 Thomson Licensing Method and apparatus for determining dominant sound source directions in a higher order ambisonics representation of a sound field
WO2018213159A1 (en) * 2017-05-15 2018-11-22 Dolby Laboratories Licensing Corporation Methods, systems and apparatus for conversion of spatial audio format(s) to speaker signals
US11277705B2 (en) 2017-05-15 2022-03-15 Dolby Laboratories Licensing Corporation Methods, systems and apparatus for conversion of spatial audio format(s) to speaker signals
US11363402B2 (en) 2019-12-30 2022-06-14 Comhear Inc. Method for providing a spatialized soundfield
US11956622B2 (en) 2019-12-30 2024-04-09 Comhear Inc. Method for providing a spatialized soundfield
CN113362805A (en) * 2021-06-18 2021-09-07 四川启睿克科技有限公司 Chinese and English speech synthesis method and device with controllable tone and accent
CN113362805B (en) * 2021-06-18 2022-06-21 四川启睿克科技有限公司 Chinese and English speech synthesis method and device with controllable tone and accent

Also Published As

Publication number Publication date
WO2000019415A3 (en) 2001-03-08
AU6400699A (en) 2000-04-17

Similar Documents

Publication Publication Date Title
US7231054B1 (en) Method and apparatus for three-dimensional audio display
WO2000019415A2 (en) Method and apparatus for three-dimensional audio display
US8374365B2 (en) Spatial audio analysis and synthesis for binaural reproduction and format conversion
CN105340298B (en) The stereo presentation of spherical harmonics coefficient
EP2285139B1 (en) Device and method for converting spatial audio signal
KR100416757B1 (en) Multi-channel audio reproduction apparatus and method for loud-speaker reproduction
US6243476B1 (en) Method and apparatus for producing binaural audio for a moving listener
US8081762B2 (en) Controlling the decoding of binaural audio signals
US8488796B2 (en) 3D audio renderer
KR101567461B1 (en) Apparatus for generating multi-channel sound signal
EP2206364B1 (en) A method for headphone reproduction, a headphone reproduction system, a computer program product
US20150131824A1 (en) Method for high quality efficient 3d sound reproduction
EP3895451B1 (en) Method and apparatus for processing a stereo signal
WO2009046223A2 (en) Spatial audio analysis and synthesis for binaural reproduction and format conversion
US8229143B2 (en) Stereo expansion with binaural modeling
EP2258120A2 (en) Methods and devices for reproducing surround audio signals via headphones
CN101112120A (en) Apparatus and method of processing multi-channel audio input signals to produce at least two channel output signals therefrom, and computer readable medium containing executable code to perform the me
Garí et al. Flexible binaural resynthesis of room impulse responses for augmented reality research
Jot et al. Binaural simulation of complex acoustic scenes for interactive audio
EP2268064A1 (en) Device and method for converting spatial audio signal
US20200059750A1 (en) Sound spatialization method
EP3700233A1 (en) Transfer function generation system and method
Nagel et al. Dynamic binaural cue adaptation
Lopez et al. Elevation in wave-field synthesis using HRTF cues
Neal et al. The impact of head-related impulse response delay treatment strategy on psychoacoustic cue reconstruction errors from virtual loudspeaker arrays

Legal Events

Date Code Title Description
ENP Entry into the national phase in:

Ref country code: AU

Ref document number: 1999 64006

Kind code of ref document: A

Format of ref document f/p: F

AK Designated states

Kind code of ref document: A2

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWE Wipo information: entry into national phase

Ref document number: 09806193

Country of ref document: US

122 Ep: pct application non-entry in european phase