WO2006003891A1 - 音声信号復号化装置及び音声信号符号化装置 - Google Patents
音声信号復号化装置及び音声信号符号化装置 Download PDFInfo
- Publication number
- WO2006003891A1 WO2006003891A1 PCT/JP2005/011842 JP2005011842W WO2006003891A1 WO 2006003891 A1 WO2006003891 A1 WO 2006003891A1 JP 2005011842 W JP2005011842 W JP 2005011842W WO 2006003891 A1 WO2006003891 A1 WO 2006003891A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- signal
- audio
- channel
- frequency
- channel signal
- Prior art date
Links
- 230000005236 sound signal Effects 0.000 title claims abstract description 33
- 238000000034 method Methods 0.000 claims abstract description 60
- 238000002156 mixing Methods 0.000 claims abstract description 41
- 230000008569 process Effects 0.000 claims abstract description 36
- 238000006243 chemical reaction Methods 0.000 claims description 25
- 238000012545 processing Methods 0.000 claims description 12
- 230000008859 change Effects 0.000 claims description 4
- 210000005069 ears Anatomy 0.000 claims description 2
- 238000013139 quantization Methods 0.000 abstract description 6
- 238000010586 diagram Methods 0.000 description 14
- 238000000926 separation method Methods 0.000 description 11
- 230000001052 transient effect Effects 0.000 description 8
- 230000000694 effects Effects 0.000 description 6
- 230000002123 temporal effect Effects 0.000 description 5
- 238000001514 detection method Methods 0.000 description 4
- 238000001228 spectrum Methods 0.000 description 4
- 238000004458 analytical method Methods 0.000 description 3
- 230000015572 biosynthetic process Effects 0.000 description 3
- 238000004364 calculation method Methods 0.000 description 3
- 238000000605 extraction Methods 0.000 description 3
- 238000001914 filtration Methods 0.000 description 3
- 239000000203 mixture Substances 0.000 description 3
- 230000002441 reversible effect Effects 0.000 description 3
- 230000003595 spectral effect Effects 0.000 description 3
- 238000003786 synthesis reaction Methods 0.000 description 3
- 230000009466 transformation Effects 0.000 description 3
- 230000035945 sensitivity Effects 0.000 description 2
- 238000012935 Averaging Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000001427 coherent effect Effects 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 230000001934 delay Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000009792 diffusion process Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 230000016507 interphase Effects 0.000 description 1
- 230000004807 localization Effects 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 238000007493 shaping process Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/18—Vocoders using multiple modes
- G10L19/24—Variable rate codecs, e.g. for generating different qualities using a scalable representation such as hierarchical encoding or layered encoding
Definitions
- The present invention relates to an encoding apparatus that extracts binaural cues from an audio signal and generates a downmix signal in the encoding process, and to an audio signal decoding apparatus that decodes a multi-channel audio signal by adding those binaural cues to the downmix signal in the decoding process.
- The present invention also relates to a binaural cue coding method that converts a multi-channel audio signal into a time-frequency (T/F) representation using a QMF (Quadrature Mirror Filter) filter bank in the encoding process.
- T/F: time-frequency
- QMF Quadrature Mirror Filter
- the present invention relates to the coding and decoding of multi-channel audio signals.
- The main object of the present invention is to encode a digital audio signal while preserving its perceptual quality as far as possible even when the bit rate is limited. A lower bit rate is advantageous for reducing the transmission bandwidth and the required storage capacity.
- The stereo channels L and R can be expressed in the form of their "sum" (L+R) and "difference" (L-R) channels.
- Since the "difference" signal carries less information than the "sum" signal, it can be quantized coarsely, with fewer bits assigned to its less important information.
- In binaural cue coding, binaural cues are generated so that the multi-channel signal can be re-formed from the downmix signal in the decoding process.
- Examples of binaural cues are the inter-channel level/intensity difference (ILD), the inter-channel phase/delay difference (IPD), and the inter-channel coherence/correlation (ICC).
- ILD: inter-channel level/intensity difference
- IPD: inter-channel phase/delay difference
- ICC: inter-channel coherence/correlation
- The relative signal power can be measured from the ILD cue,
- the time difference until the sound reaches both ears can be measured from the IPD cue, and
- the similarity of the channels can be measured from the ICC cue.
- In general, level/intensity cues and phase/delay cues control the balance and localization of the sound,
- while coherence/correlation cues control the width and diffuseness of the sound. Together, these cues are spatial parameters that help the listener compose an acoustic scene in his or her head.
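- As an illustration of the first of these cues, the short Python sketch below computes an ILD value for one frequency band from hypothetical subband samples; the variable names and the power-ratio-in-dB definition are assumptions for illustration and are not taken from this description.

```python
import numpy as np

# Hypothetical subband samples of one band b of the left and right channels;
# in the codec these would come from the T/F representations L(t, f) and R(t, f).
rng = np.random.default_rng(0)
L_band = 0.8 * rng.standard_normal(1024)
R_band = 0.4 * rng.standard_normal(1024)

# ILD cue for this band, taken here as the ratio of the channel powers in dB.
ild_db = 10.0 * np.log10(np.sum(L_band**2) / (np.sum(R_band**2) + 1e-12))
print(f"ILD for this band: {ild_db:.1f} dB")
```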
- FIG. 1 is a diagram illustrating the configuration of a typical codec that performs encoding and decoding using binaural cue coding.
- the binaural cue extraction module (502) processes L, R, and M to generate a binaural cue.
- the binaural cue extraction module (502) usually includes a time-frequency conversion module.
- The time-to-frequency conversion module converts L, R, and M into a full spectral representation, such as an FFT or MDCT representation, or into a mixed time-and-frequency representation such as a QMF representation.
- Alternatively, M can be generated after the spectral conversion by taking the average of the spectrally represented L and R.
- The binaural cues can then be obtained by comparing L, R, and M, expressed as described above, band by band over the spectral bands.
- the speech encoder (504) encodes the M signal and generates a compressed bit stream.
- Examples of speech encoders include MP3 and AAC encoders.
- The binaural cues are quantized in (506) and then multiplexed with the compressed M to form a complete bitstream.
- In the decoding process, the demultiplexer (508) separates the M bitstream from the binaural cue information.
- the audio decoder (510) decodes the M bitstream and restores the downmix signal M.
- The multi-channel synthesis module (512) processes the downmix signal and the dequantized binaural cues to restore the multi-channel signal.
- Literature related to the prior art includes the following:
- Non-Patent Document 1: [1] ISO/IEC 14496-3:2001/FDAM2, "Parametric Coding for High Quality Audio"
- Patent Document 1: [2] WO03/007656A1, "Efficient and Scalable Parametric Stereo Coding for Low Bitrate Applications"
- Patent Document 2: [3] WO03/090208A1, "Parametric Representation of Spatial Audio"
- Patent Document 3: [4] US6252965B1, "Multichannel Spectral Mapping Audio Apparatus and Method"
- Patent Document 4: [5] US2003/0219130A1, "Coherence-based Audio Coding and Synthesis"
- Patent Document 5: [6] US2003/0035553A1, "Backwards-Compatible Perceptual Coding of Spatial Cues"
- Patent Document 6: [7] US2003/0235317A1, "Equalization For Audio Mixing"
- Patent Document 7: [8] US2003/0236583A1, "Hybrid Multi-channel/Cue Coding/Decoding of Audio Signals"
- In Non-Patent Document 1, sound spreading is realized by mixing a downmix signal with a reverberation signal.
- The reverberation signal is obtained by processing the downmix signal with a Schroeder all-pass link.
- The coefficients of this filter are all determined in the decoding process. When the audio signal contains rapidly changing features, a separate transient attenuation process is applied to the reverberation signal in order to suppress its spread and remove excessive echo effects. However, performing such a separate filtering process generates an additional computational load.
- FIG. 2 is a diagram illustrating a conventional standard time segment dividing method.
- The prior-art method [1] uses L, R, and M expressed in the T/F representation and divides them with the fixed time segments shown in FIG. 2; such a division does not make full use of the psychoacoustic characteristics of the ear.
- An object of the present invention is to improve the binaural-cue-coding-based methods of the prior art.
- In Embodiment 1 of the present invention, it is proposed to control the reverberation spread directly by changing the filter coefficients that affect it. It is further proposed that these filter coefficients be controlled by the ICC cue and a transient detection module.
- In Embodiment 2, the T/F representation is divided into a plurality of "sections" in the frequency direction.
- The maximum allowable number of temporal boundaries differs from section to section, so that fewer temporal boundaries are allowed for sections belonging to the high-frequency region. In this way, the signal can be subdivided more finely in the low-frequency region, and the level adjustment can be performed more accurately while suppressing sudden changes in the bit rate.
- In Embodiment 3, it is proposed that the crossover frequency be changed in accordance with the bit rate.
- When the coding of the original sound is expected to be coarse because of bit-rate restrictions, it is proposed to mix the original sound signal with the downmix signal in the low-frequency range.
- FIG. 1 is a diagram showing a configuration of a conventional typical binaural cue code system.
- FIG. 2 is a diagram illustrating a typical conventional time division method for various frequency sections.
- FIG. 3 is a block diagram showing a configuration of a coding apparatus according to the present invention.
- FIG. 4 is a diagram showing a temporal division method for various frequency sections.
- FIG. 5 is a block diagram showing a configuration of a decoding device according to Embodiment 1 of the present invention.
- FIG. 6 is a block diagram showing a configuration of a decoding apparatus according to Embodiment 3 of the present invention.
- FIG. 7 is a block diagram showing a configuration of a coding system according to Embodiment 3 of the present invention.
- Although the example described here is a stereo-to-mono case, the present invention is not limited to this; it can be generalized to M original channels and N downmix channels.
- FIG. 3 is a block diagram showing a configuration of the coding apparatus according to the first embodiment.
- Fig. 3 shows the encoding process according to the present invention.
- The encoding apparatus according to the present embodiment comprises a conversion module (100), a downmix module (102), two energy envelope analyzers (104) for L(t, f) and R(t, f),
- a module (106) for calculating the left-channel inter-channel phase cue IPDL(b), a module (108) for calculating the right-channel IPDR(b), and a module (110) for calculating ICC(b).
- the conversion module (100) processes the original channel, denoted below as a function of time L (t) and R (t).
- the conversion module (100) is, for example, a complex QMF filter bank as used in MPEG Audio Extensions 1 and 2.
- L (t, f) and R (t, f) include a plurality of continuous subbands, and each subband represents a narrow frequency band of the original signal.
- The QMF filter bank has narrow passbands for the low-frequency subbands and wider passbands for the high-frequency subbands.
- The downmix module (102) processes L(t, f) and R(t, f) to generate the downmix signal M(t, f).
- M: downmix signal
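- As a rough sketch of the conversion module (100) and the downmix module (102): the patent itself uses a complex QMF filter bank, which is not reproduced here, so an STFT is used below merely as a stand-in time-frequency representation; the averaging of L and R follows the description above, and all variable names are illustrative.

```python
import numpy as np
from scipy.signal import stft

fs = 48000
t = np.arange(fs) / fs
L_t = np.sin(2 * np.pi * 440 * t)               # toy left-channel signal
R_t = 0.5 * np.sin(2 * np.pi * 440 * t + 0.3)   # toy right-channel signal

# Conversion module (100): T/F representation. An STFT stands in for the
# complex QMF filter bank used in the patent.
_, _, L_tf = stft(L_t, fs=fs, nperseg=512)      # L(t, f)
_, _, R_tf = stft(R_t, fs=fs, nperseg=512)      # R(t, f)

# Downmix module (102): M obtained by averaging the spectrally represented
# L and R, as stated in the description.
M_tf = 0.5 * (L_tf + R_tf)                      # M(t, f)
```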
- level adjustment is performed using an energy cue instead of an ILD cue.
- FIG. 4 is a diagram showing how to divide the time-frequency section for adjusting the energy envelope of the audio channel signal after mixing.
- the time-frequency representation L (t, f) is divided into a plurality of bands (400) in the frequency direction. Each band includes a plurality of subbands.
- A low-frequency band contains fewer subbands than a high-frequency band. For example, when grouping subbands into bands, the "Bark scale" or the "critical bands", well known in the field of psychoacoustics, can be used.
- L(t, f) is further divided in the time direction by BorderL, and EL(l, b) is calculated for each resulting time-frequency segment (l, b).
- Here l is the index of the time segment
- and b is the index of the band.
- The optimal locations for BorderL are the temporal positions where the energy of L(t, f) changes sharply and where the energy of the signal reconstructed in the decoding process is expected to change sharply.
- EL(l, b) is used to shape the energy envelope of the downmix signal for each band, whose boundaries are determined by the same critical-band boundaries and by BorderL.
- the energy EL (1, b) is defined as follows.
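- A minimal sketch of the energy envelope analysis (104) follows; the exact formula for EL(l, b) is not reproduced in this text, so the sum of squared magnitudes over each (l, b) region is assumed here, and all parameter names are illustrative.

```python
import numpy as np

def energy_envelope(X_tf, band_edges, time_borders):
    """Assumed form of the energy envelope E(l, b) for one channel.
    X_tf: complex T/F representation, shape (num_subbands, num_time_slots).
    band_edges: subband indices delimiting the bands b (e.g. a Bark-like grouping).
    time_borders: time-slot indices (BorderL) delimiting the segments l."""
    E = np.zeros((len(time_borders) - 1, len(band_edges) - 1))
    for l in range(len(time_borders) - 1):
        for b in range(len(band_edges) - 1):
            region = X_tf[band_edges[b]:band_edges[b + 1],
                          time_borders[l]:time_borders[l + 1]]
            E[l, b] = np.sum(np.abs(region) ** 2)   # energy of the (l, b) region
    return E
```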
- The right channel energy envelope analysis module (104) processes R(t, f) and generates ER(l, b) and BorderR.
- The left-channel inter-channel phase cue calculation module (106) processes L(t, f) and M(t, f) and obtains IPDL(b) using the following equation.
- Here M*(t, f) denotes the complex conjugate of M(t, f).
- In the same manner, the right-channel phase cue calculation module (108) obtains the right-channel inter-channel phase cue IPDR(b).
- The module (110) processes L(t, f) and R(t, f) in order to obtain the inter-channel coherence cue ICC(b) between the left channel and the right channel.
- ICC(b) is calculated using the following formula.
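- The equations themselves are not reproduced in this text, so the sketch below assumes the usual definitions consistent with the description: IPDL(b) as the angle of the cross-spectrum of L and M* summed over the band, and ICC(b) as a normalized cross-correlation between L and R; the function and parameter names are illustrative.

```python
import numpy as np

def ipd_cue(L_tf, M_tf, lo, hi):
    """Assumed form of the phase cue module (106): IPDL(b) from L(t, f) and
    the complex conjugate M*(t, f), summed over subbands lo..hi of band b."""
    return np.angle(np.sum(L_tf[lo:hi] * np.conj(M_tf[lo:hi])))

def icc_cue(L_tf, R_tf, lo, hi):
    """Assumed form of the coherence cue module (110): a normalized
    cross-correlation over band b, giving a value between 0 and 1."""
    num = np.abs(np.sum(L_tf[lo:hi] * np.conj(R_tf[lo:hi])))
    den = np.sqrt(np.sum(np.abs(L_tf[lo:hi]) ** 2) *
                  np.sum(np.abs(R_tf[lo:hi]) ** 2)) + 1e-12
    return num / den
```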
- All of these binaural cues become part of the side information in the encoding process.
- FIG. 5 is a block diagram showing a configuration of the decoding device according to the first embodiment.
- The decoding apparatus according to the first embodiment comprises a conversion module (200), a reverberation generator (202), a transient detector (204), phase adjusters (206, 208), mixers 2 (210, 212), energy adjusters (214, 216), and an inverse conversion module (218).
- FIG. 5 shows a possible decoding process that uses the binaural cues generated as described above.
- The conversion module (200) processes the downmix signal M(t) and converts it into the time-frequency representation M(t, f).
- the conversion module (200) shown in the present embodiment is a complex QMF filter bank.
- The reverberation generator (202) processes M(t, f) and generates a "diffuse version" of M(t, f), called MD(t, f).
- MD(t, f): a "diffuse version" of M(t, f)
- This diffuse version creates a more “stereo” impression (in the case of multi-channel, a “surround” impression) by inserting “echo” into M (t, f).
- fractional delay all-pass filtering is used to obtain the reverberation effect.
- a cascade system of multiple all-pass filters (known as Schroeder's all-pass link) is used.
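- A minimal sketch of such a reverberation generator follows. The patent uses fractional-delay all-pass filtering with adaptively controlled coefficients; the integer delays and fixed gains below are placeholders chosen only to illustrate a Schroeder all-pass cascade.

```python
import numpy as np
from scipy.signal import lfilter

def schroeder_allpass_link(x, delays=(142, 107, 379), gains=(0.7, 0.7, 0.7)):
    """Cascade of all-pass sections H(z) = (-g + z^-d) / (1 - g * z^-d),
    a simple stand-in for the reverberation generator (202)."""
    y = np.asarray(x, dtype=float)
    for d, g in zip(delays, gains):
        b = np.zeros(d + 1); b[0] = -g; b[d] = 1.0
        a = np.zeros(d + 1); a[0] = 1.0; a[d] = -g
        y = lfilter(b, a, y)
    return y
```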
- The method of controlling the reverberation attenuation rate in the prior art is not optimal for all signal characteristics. For example, if the signal changes rapidly, like a spike-shaped signal, it is desirable to keep the reverberation low in order to avoid excessive echo effects; in the prior art this is achieved by suppressing the reverberation with a separate process.
- The slope(f, m) parameter is adaptively controlled using the ICC cue.
- new_slope(f, m) is used instead of slope(f, m), as follows.
- new_slope(f, m) is defined using the output Tr_flag(b) of the transient detection module (204) and ICC(b) as follows:
- For a rapidly changing signal, the transient detection module (204) returns a small Tr_flag(b) value, such as 0.1, which reduces new_slope(f, m); as a result, the reverberation is reduced. On the other hand, for a smoothly changing signal, the transient detection module (204) returns a large Tr_flag(b) value, such as 0.99; as a result, the desired amount of reverberation is maintained.
- Tr_flag(b) can be generated by analyzing M(t, f) in the decoding process. Alternatively, Tr_flag(b) can be generated in the encoding process and transmitted to the decoding side as side information.
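- The detection criterion itself is not specified in this text; the sketch below assumes a simple ratio of consecutive short-term energies, returning the example values 0.1 and 0.99 mentioned above.

```python
import numpy as np

def transient_flag(M_band, threshold=4.0):
    """Assumed sketch of the transient detection module (204): a small
    Tr_flag(b) for rapidly changing signals, a large one for smooth signals."""
    half = len(M_band) // 2
    e1 = np.sum(np.abs(M_band[:half]) ** 2) + 1e-12
    e2 = np.sum(np.abs(M_band[half:]) ** 2) + 1e-12
    return 0.1 if max(e1 / e2, e2 / e1) > threshold else 0.99
```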
- The reverberation signal MD(t, f), expressed in the z-domain, is generated by convolving M(t, f) with Hf(z) (convolution in the time domain corresponds to multiplication in the z-domain).
- Lreverb(t, f) and Rreverb(t, f) are generated in the phase adjustment modules (208) and (206) by applying the phase cues IPDL(b) and IPDR(b), respectively, to MD(t, f). By performing this process, the phase relationship that the original sound had with the downmix signal in the encoding process can be recovered.
- The phases applied here can be interpolated with the phases of previously processed audio frames before being applied.
- Lreverb(t, f)
- For this interpolation, the formula in the left-channel phase adjustment module (208) is changed as follows.
- The right-channel phase adjustment module (206) performs the interpolation in the same manner and generates Rreverb(t, f) from MD(t, f).
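- A minimal sketch of the phase adjustment step follows; the per-frame interpolation is omitted, and rotating each band of MD(t, f) by the transmitted IPDL(b) is assumed as the basic operation, with illustrative names.

```python
import numpy as np

def apply_phase_cue(MD_tf, ipd_per_band, band_edges):
    """Assumed sketch of the left-channel phase adjustment module (208):
    each band b of the diffuse signal MD(t, f) is rotated by IPDL(b)."""
    L_rev = MD_tf.astype(complex)
    for b in range(len(band_edges) - 1):
        lo, hi = band_edges[b], band_edges[b + 1]
        L_rev[lo:hi] *= np.exp(1j * ipd_per_band[b])
    return L_rev
```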
- Lreverb(t, f) and Rreverb(t, f) are then shaped in the left-channel energy adjustment module (214) and the right-channel energy adjustment module (216), respectively.
- The shaping is done so that their energy envelopes resemble the original energy envelopes in the various bands delimited by BorderL, BorderR, and the boundaries of the predetermined frequency sections (as shown in FIG. 4).
- The gain coefficient GL(l, b) is calculated for the band (l, b) as follows.
- Lreverb(t, f) is then multiplied by this gain coefficient for all samples in the band.
- The right-channel energy adjustment module (216) performs the same processing for the right channel.
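- The gain formula is not reproduced in this text; the sketch below assumes the usual choice, the square root of the ratio of the transmitted envelope EL(l, b) to the measured envelope of Lreverb(t, f), with illustrative parameter names.

```python
import numpy as np

def energy_adjust(L_rev_tf, E_target, band_edges, time_borders):
    """Assumed sketch of the energy adjustment module (214)."""
    out = L_rev_tf.copy()
    for l in range(len(time_borders) - 1):
        for b in range(len(band_edges) - 1):
            sl = (slice(band_edges[b], band_edges[b + 1]),
                  slice(time_borders[l], time_borders[l + 1]))
            e_actual = np.sum(np.abs(out[sl]) ** 2) + 1e-12
            gain = np.sqrt(E_target[l, b] / e_actual)   # assumed GL(l, b)
            out[sl] *= gain   # applied to all samples in the band (l, b)
    return out
```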
- Since Lreverb(t, f) and Rreverb(t, f) are merely artificial reverberation signals, in some cases it may not be optimal to use them directly as the multi-channel signals.
- Although the parameter slope(f, m) is adjusted to new_slope(f, m), this cannot change the principal component of the echo, which is determined by the order of the all-pass filter.
- Therefore, by mixing Lreverb(t, f) and Rreverb(t, f) with the downmix signal M(t, f) in the left-channel mixer 2 (210) and the right-channel mixer 2 (212), which are mixing modules, an option to extend the range of control is provided.
- The ratio of the reverberation signals Lreverb(t, f) and Rreverb(t, f) to the downmix signal M(t, f) depends on ICC(b) and can be controlled, for example, as follows:
- Lreverb'(t, f) = (1 - ICC(b)) * Lreverb(t, f) + ICC(b) * M(t, f)
- ICC(b) indicates the correlation between the left channel and the right channel.
- When the correlation is high, more of M(t, f) is mixed into Lreverb(t, f) and Rreverb(t, f); conversely, when the correlation is low, less of M(t, f) is mixed in.
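- The relation above translates directly into a short sketch of mixer 2 (210); the function name is illustrative.

```python
def mix_with_downmix(L_rev_band, M_band, icc_b):
    """Mixer 2, following Lreverb'(t, f) = (1 - ICC(b)) * Lreverb(t, f)
    + ICC(b) * M(t, f): a high ICC(b) mixes in more of the downmix."""
    return (1.0 - icc_b) * L_rev_band + icc_b * M_band
```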
- The inverse conversion module (218) inverse-transforms the energy-adjusted Ladj(t, f) and Radj(t, f) to generate signals on the time axis.
- Here, inverse QMF processing is used.
- the second embodiment relates to the energy envelope analysis module (104) shown in FIG.
- With a conventional fixed division, the psychoacoustic characteristics of the ear cannot be exploited. Therefore, in the present embodiment, as shown in FIG. 4, the characteristic of the ear that its sensitivity to high-frequency sound is low is exploited: the low-frequency region is divided finely, while the accuracy of the division is lowered for the high-frequency region.
- the frequency band of L (t, f) is further divided into "sections" (402).
- Figure 4 shows three sections from Section 0 (402) to Section 2 (404).
- a high frequency section can have, for example, at most one boundary (404), which divides the frequency section into two.
- splitting in the highest frequency section is not allowed.
- For this section, the well-known "intensity stereo" technique used in the prior art can be applied. The accuracy of the division increases toward the lower sections, where the sensitivity of the ear is higher.
- The section boundaries may be part of the side information, or may be determined in advance according to the coding bit rate. The temporal boundaries (406) within each section, however, are part of the side information BorderL.
- The first boundary of a target frame does not have to be the start boundary of that frame; two consecutive frames may share the same energy envelope across the frame boundary. In this case, two audio frames need to be buffered to enable this processing.
- FIG. 6 is a block diagram showing a configuration of the decoding device according to the third embodiment.
- In FIG. 6, the portion surrounded by a broken line is a signal separation unit that separates, from the downmix signal, the signals Lreverb and Rreverb whose phases are adjusted in the reverberation generator (302) and the premixing channel signals obtained by premixing in mixers 1 (322, 324).
- This decoding apparatus comprises the signal separation unit, a conversion module (300), mixers 1 (322, 324), a low-pass filter (320), mixers 2 (310, 312), energy adjusters (314, 316), and an inverse conversion module (318).
- In the present embodiment, the coarsely quantized multi-channel signals and the reverberation signal are mixed in the low-frequency region. The reason coarse quantization is used is that the bit rate is limited.
- The coarsely quantized Llf(t) and Rlf(t) are time-frequency converted, together with the downmix signal M(t), in the conversion module (300), which is a QMF filter bank, and are expressed as Llf(t, f) and Rlf(t, f), respectively.
- The left mixer 1 (322) and the right mixer 1 (324), which are premixing modules, premix the left channel Llf(t, f) and the right channel Rlf(t, f), respectively, into the downmix signal M(t, f).
- As a result, the premixing channel signals LM(t, f) and RM(t, f) are generated.
- The premixing is performed as follows.
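- The premixing equation itself is not reproduced in this text; the sketch below is only one plausible form, adding the coarsely quantized difference channel to the downmix with an ICC(b)-dependent weight, as suggested by the later description of the channel separation unit; all names and the weighting are assumptions.

```python
def premix(M_band, Llf_band, icc_b):
    """Assumed sketch of the left mixer 1 (322): LM(t, f) built from the
    downmix M(t, f) and the coarse difference channel Llf(t, f)."""
    return M_band + (1.0 - icc_b) * Llf_band
```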
- The difference signals Llf(t) and Rlf(t) are calculated, and only their main frequency components up to fx, determined according to the psychoacoustic model, are coded.
- a predetermined quantization step can be employed.
- Alternatively, the corresponding channel signal obtained by the separation may be subtracted, for example:
- Llf(t) = L(t) - Lreverb(t)
- Llf(t) and Rlf(t) may then be added to correct the signal shift.
- When the number of bits available for quantizing Llf(t) and Rlf(t) is insufficient, they cannot be mixed.
- In that case, fx is set to zero.
- Binaural cue coding is then performed only for the frequency range above fx.
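- To illustrate the crossover at fx, the sketch below simply partitions T/F bins at fx: components below fx would be carried by the coarsely quantized difference channels, and components above fx would be reconstructed by binaural cue coding. This partitioning is an assumption for illustration only; in FIG. 6 the low-pass filter (320) presumably plays a related role.

```python
import numpy as np

def split_at_crossover(X_tf, bin_freqs, fx):
    """Assumed sketch: partition a T/F representation at the crossover
    frequency fx (which Embodiment 3 adapts to the bit rate)."""
    mask = bin_freqs[:, None] < fx
    low = X_tf * mask          # region handled by the coarse difference channels
    high = X_tf * (~mask)      # region handled by binaural cue coding
    return low, high
```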
- FIG. 7 is a block diagram showing the configuration of a coding system including the coding apparatus and the decoding apparatus according to the third embodiment.
- The coding system of Embodiment 3 comprises, on the encoding side, a downmix unit (410), an AAC encoder (411), a binaural cue encoder (412), and a second encoder (413), and, on the decoding side,
- an AAC decoder (414), a premixing unit (415), a signal separation unit (416), and a mixing unit (417).
- the signal separation unit (416) includes a channel separation unit (418) and a phase adjustment unit (419).
- The downmix unit (410) is, for example, the same as the downmix module (102) shown in FIG. 3.
- The downmix signal M(t) generated in this way is MDCT (Modified Discrete Cosine Transform) transformed in the AAC encoder (411), quantized for each subband, variable-length encoded, and embedded in the encoded bitstream.
- MDCT: Modified Discrete Cosine Transform
- The binaural cue encoder (412) converts the audio channel signals L(t), R(t), and M(t) into the time-frequency representation by QMF and compares the channel signals to calculate the binaural cues.
- The binaural cue encoder (412) encodes the calculated binaural cues and multiplexes them into the code string.
- The second encoder (413) calculates, as shown for example in Formula 15, the difference signals Llf(t) and Rlf(t) between the right channel signal R(t) and the left channel signal L(t), respectively, and the downmix signal M(t), coarsely quantizes them, and encodes them.
- The second encoder (413) does not necessarily have to use the same encoding format as the AAC encoder (411).
- The AAC decoder (414) decodes the downmix signal encoded in the AAC format and converts the decoded downmix signal into the time-frequency representation M(t, f) by QMF.
- the signal separation unit (416) includes a channel separation unit (418) and a phase adjustment unit (419).
- The channel separation unit (418) decodes the binaural cue parameters encoded by the binaural cue encoder (412) and the difference signals Llf(t) and Rlf(t) encoded by the second encoder (413), and then converts the difference signals Llf(t) and Rlf(t) into the time-frequency representation. Thereafter, the channel separation unit (418) premixes, for example in accordance with ICC(b), the downmix signal M(t, f) output from the AAC decoder (414) with the difference signals Llf(t, f) and Rlf(t, f) converted into the time-frequency representation, and outputs the resulting premixing channel signals LM and RM to the mixing unit (417).
- Meanwhile, the phase adjustment unit (419) generates and adds the necessary reverberation components to the downmix signal M(t, f), adjusts its phase, and outputs the result as the phase adjustment signals Lrev and Rrev to the mixing unit (417).
- For the left channel, the mixing unit (417) mixes the premixing channel signal LM with the phase adjustment signal Lrev, applies the inverse QMF to the resulting mixed signal, and outputs an output signal L" expressed as a function of time.
- Similarly, for the right channel, it mixes the premixing channel signal RM with the phase adjustment signal Rrev, applies the inverse QMF to the resulting mixed signal, and outputs an output signal R" expressed as a function of time.
- the left and right differential signals Llf (t) and Rlf (t) are phase-adjusted with the original audio channel signals L (t) and R (t).
- the present invention can be applied to a home theater system, a car audio system, an electronic game system, and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Mathematical Physics (AREA)
- Quality & Reliability (AREA)
- Stereophonic System (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
Abstract
Description
Claims
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006528708A JP4934427B2 (ja) | 2004-07-02 | 2005-06-28 | 音声信号復号化装置及び音声信号符号化装置 |
CA2572805A CA2572805C (en) | 2004-07-02 | 2005-06-28 | Audio signal decoding device and audio signal encoding device |
KR1020067024727A KR101120911B1 (ko) | 2004-07-02 | 2005-06-28 | 음성신호 복호화 장치 및 음성신호 부호화 장치 |
US11/629,135 US7756713B2 (en) | 2004-07-02 | 2005-06-28 | Audio signal decoding device which decodes a downmix channel signal and audio signal encoding device which encodes audio channel signals together with spatial audio information |
CN2005800226670A CN1981326B (zh) | 2004-07-02 | 2005-06-28 | 音频信号解码装置和方法及音频信号编码装置和方法 |
EP05765247.1A EP1768107B1 (en) | 2004-07-02 | 2005-06-28 | Audio signal decoding device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-197336 | 2004-07-02 | ||
JP2004197336 | 2004-07-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006003891A1 true WO2006003891A1 (ja) | 2006-01-12 |
Family
ID=35782698
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/011842 WO2006003891A1 (ja) | 2004-07-02 | 2005-06-28 | 音声信号復号化装置及び音声信号符号化装置 |
Country Status (7)
Country | Link |
---|---|
US (1) | US7756713B2 (ja) |
EP (1) | EP1768107B1 (ja) |
JP (1) | JP4934427B2 (ja) |
KR (1) | KR101120911B1 (ja) |
CN (1) | CN1981326B (ja) |
CA (1) | CA2572805C (ja) |
WO (1) | WO2006003891A1 (ja) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2048658A1 (en) * | 2006-08-04 | 2009-04-15 | Panasonic Corporation | Stereo audio encoding device, stereo audio decoding device, and method thereof |
KR101111520B1 (ko) | 2006-12-07 | 2012-05-24 | 엘지전자 주식회사 | 오디오 처리 방법 및 장치 |
JP2012181556A (ja) * | 2005-09-13 | 2012-09-20 | Koninkl Philips Electronics Nv | オーディオ符号化 |
US8374882B2 (en) | 2008-12-11 | 2013-02-12 | Fujitsu Limited | Parametric stereophonic audio decoding for coefficient correction by distortion detection |
US8504376B2 (en) | 2006-09-29 | 2013-08-06 | Lg Electronics Inc. | Methods and apparatuses for encoding and decoding object-based audio signals |
JP2013545128A (ja) * | 2010-10-13 | 2013-12-19 | サムスン エレクトロニクス カンパニー リミテッド | 多チャネルオーディオ信号をダウンミックスする方法及び装置 |
WO2014068817A1 (ja) * | 2012-10-31 | 2014-05-08 | パナソニック株式会社 | オーディオ信号符号化装置及びオーディオ信号復号装置 |
JP2017078858A (ja) * | 2013-04-05 | 2017-04-27 | ドルビー・インターナショナル・アーベー | 信号をインタリーブするためのオーディオ復号器 |
JP2021047432A (ja) * | 2017-03-31 | 2021-03-25 | 華為技術有限公司Huawei Technologies Co.,Ltd. | マルチチャネル信号符号化方法、マルチチャネル信号復号化方法、符号器、及び復号器 |
JP2021121853A (ja) * | 2017-04-12 | 2021-08-26 | 華為技術有限公司Huawei Technologies Co., Ltd. | マルチチャネル信号符号化方法、マルチチャネル信号復号方法、エンコーダ、およびデコーダ |
Families Citing this family (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008517317A (ja) * | 2004-10-15 | 2008-05-22 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | オーディオデータ処理システム、方法、プログラム要素、及びコンピュータ読み取り可能媒体 |
US8768691B2 (en) * | 2005-03-25 | 2014-07-01 | Panasonic Corporation | Sound encoding device and sound encoding method |
JP2009500656A (ja) | 2005-06-30 | 2009-01-08 | エルジー エレクトロニクス インコーポレイティド | オーディオ信号をエンコーディング及びデコーディングするための装置とその方法 |
WO2007004830A1 (en) | 2005-06-30 | 2007-01-11 | Lg Electronics Inc. | Apparatus for encoding and decoding audio signal and method thereof |
US8019614B2 (en) * | 2005-09-02 | 2011-09-13 | Panasonic Corporation | Energy shaping apparatus and energy shaping method |
WO2008039038A1 (en) * | 2006-09-29 | 2008-04-03 | Electronics And Telecommunications Research Institute | Apparatus and method for coding and decoding multi-object audio signal with various channel |
JP2010516077A (ja) * | 2007-01-05 | 2010-05-13 | エルジー エレクトロニクス インコーポレイティド | オーディオ信号処理方法及び装置 |
BRPI0923174B1 (pt) | 2008-12-19 | 2020-10-06 | Dolby International Ab | Método e reverberador para aplicar reverberação a um sinal de entrada de áudio com downmixing de mcanais |
US8666752B2 (en) | 2009-03-18 | 2014-03-04 | Samsung Electronics Co., Ltd. | Apparatus and method for encoding and decoding multi-channel signal |
EP2704143B1 (en) | 2009-10-21 | 2015-01-07 | Panasonic Intellectual Property Corporation of America | Apparatus, method and computer program for audio signal processing |
US12002476B2 (en) | 2010-07-19 | 2024-06-04 | Dolby International Ab | Processing of audio signals during high frequency reconstruction |
KR101445291B1 (ko) * | 2010-08-25 | 2014-09-29 | 프라운호퍼 게젤샤프트 쭈르 푀르데룽 데어 안겐반텐 포르슝 에. 베. | 결합 유닛 및 믹서를 사용하여 트랜지언트들을 포함하는 신호를 디코딩하기 위한 장치 |
US8908874B2 (en) * | 2010-09-08 | 2014-12-09 | Dts, Inc. | Spatial audio encoding and reproduction |
FR2966634A1 (fr) * | 2010-10-22 | 2012-04-27 | France Telecom | Codage/decodage parametrique stereo ameliore pour les canaux en opposition de phase |
TWI462087B (zh) | 2010-11-12 | 2014-11-21 | Dolby Lab Licensing Corp | 複數音頻信號之降混方法、編解碼方法及混合系統 |
KR101842257B1 (ko) * | 2011-09-14 | 2018-05-15 | 삼성전자주식회사 | 신호 처리 방법, 그에 따른 엔코딩 장치, 및 그에 따른 디코딩 장치 |
CN102446507B (zh) * | 2011-09-27 | 2013-04-17 | 华为技术有限公司 | 一种下混信号生成、还原的方法和装置 |
US9161149B2 (en) | 2012-05-24 | 2015-10-13 | Qualcomm Incorporated | Three-dimensional sound compression and over-the-air transmission during a call |
US9190065B2 (en) | 2012-07-15 | 2015-11-17 | Qualcomm Incorporated | Systems, methods, apparatus, and computer-readable media for three-dimensional audio coding using basis function coefficients |
US9516446B2 (en) | 2012-07-20 | 2016-12-06 | Qualcomm Incorporated | Scalable downmix design for object-based surround codec with cluster analysis by synthesis |
US9761229B2 (en) | 2012-07-20 | 2017-09-12 | Qualcomm Incorporated | Systems, methods, apparatus, and computer-readable media for audio object clustering |
JP2014074782A (ja) * | 2012-10-03 | 2014-04-24 | Sony Corp | 音声送信装置、音声送信方法、音声受信装置および音声受信方法 |
WO2014058138A1 (ko) * | 2012-10-12 | 2014-04-17 | 한국전자통신연구원 | 객체 오디오 신호의 잔향 신호를 이용한 오디오 부/복호화 장치 |
KR20140047509A (ko) | 2012-10-12 | 2014-04-22 | 한국전자통신연구원 | 객체 오디오 신호의 잔향 신호를 이용한 오디오 부/복호화 장치 |
US8804971B1 (en) | 2013-04-30 | 2014-08-12 | Dolby International Ab | Hybrid encoding of higher frequency and downmixed low frequency content of multichannel audio |
EP2804176A1 (en) * | 2013-05-13 | 2014-11-19 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Audio object separation from mixture signal using object-specific time/frequency resolutions |
CN105229731B (zh) | 2013-05-24 | 2017-03-15 | 杜比国际公司 | 根据下混的音频场景的重构 |
US10026408B2 (en) | 2013-05-24 | 2018-07-17 | Dolby International Ab | Coding of audio scenes |
EP2830056A1 (en) | 2013-07-22 | 2015-01-28 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for encoding or decoding an audio signal with intelligent gap filling in the spectral domain |
EP2840811A1 (en) * | 2013-07-22 | 2015-02-25 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Method for processing an audio signal; signal processing unit, binaural renderer, audio encoder and audio decoder |
WO2015012594A1 (ko) * | 2013-07-23 | 2015-01-29 | 한국전자통신연구원 | 잔향 신호를 이용한 다채널 오디오 신호의 디코딩 방법 및 디코더 |
CN108347689B (zh) * | 2013-10-22 | 2021-01-01 | 延世大学工业学术合作社 | 用于处理音频信号的方法和设备 |
CN104768121A (zh) | 2014-01-03 | 2015-07-08 | 杜比实验室特许公司 | 响应于多通道音频通过使用至少一个反馈延迟网络产生双耳音频 |
WO2016142002A1 (en) | 2015-03-09 | 2016-09-15 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for encoding an audio signal and method for decoding an encoded audio signal |
US10109284B2 (en) * | 2016-02-12 | 2018-10-23 | Qualcomm Incorporated | Inter-channel encoding and decoding of multiple high-band audio signals |
CN108269577B (zh) * | 2016-12-30 | 2019-10-22 | 华为技术有限公司 | 立体声编码方法及立体声编码器 |
EP4398243A3 (en) * | 2019-06-14 | 2024-10-09 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Parameter encoding and decoding |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06105824A (ja) * | 1992-09-28 | 1994-04-19 | Toshiba Corp | 磁気共鳴信号の処理装置およびその処理方法 |
JPH09102742A (ja) * | 1995-10-05 | 1997-04-15 | Sony Corp | 符号化方法および装置、復号化方法および装置、並びに記録媒体 |
JPH09507734A (ja) * | 1994-01-04 | 1997-08-05 | モトローラ・インコーポレイテッド | 広帯域および狭帯域無線通信を同時に行うための方法および装置 |
JP2003522439A (ja) * | 1999-06-15 | 2003-07-22 | ヒアリング エンハンスメント カンパニー,リミティド ライアビリティー カンパニー | 音声対残留オーディオ(vra)相互作用式補聴装置および補助設備 |
WO2003090207A1 (en) * | 2002-04-22 | 2003-10-30 | Koninklijke Philips Electronics N.V. | Parametric multi-channel audio representation |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09102472A (ja) * | 1995-10-06 | 1997-04-15 | Matsushita Electric Ind Co Ltd | 誘電体素子の製造方法 |
US6252965B1 (en) * | 1996-09-19 | 2001-06-26 | Terry D. Beard | Multichannel spectral mapping audio apparatus and method |
DE19721487A1 (de) * | 1997-05-23 | 1998-11-26 | Thomson Brandt Gmbh | Verfahren und Vorrichtung zur Fehlerverschleierung bei Mehrkanaltonsignalen |
JP3352406B2 (ja) * | 1998-09-17 | 2002-12-03 | 松下電器産業株式会社 | オーディオ信号の符号化及び復号方法及び装置 |
US20030035553A1 (en) * | 2001-08-10 | 2003-02-20 | Frank Baumgarte | Backwards-compatible perceptual coding of spatial cues |
US7292901B2 (en) * | 2002-06-24 | 2007-11-06 | Agere Systems Inc. | Hybrid multi-channel/cue coding/decoding of audio signals |
US7006636B2 (en) * | 2002-05-24 | 2006-02-28 | Agere Systems Inc. | Coherence-based audio coding and synthesis |
SE0202159D0 (sv) | 2001-07-10 | 2002-07-09 | Coding Technologies Sweden Ab | Efficientand scalable parametric stereo coding for low bitrate applications |
CN1312660C (zh) * | 2002-04-22 | 2007-04-25 | 皇家飞利浦电子股份有限公司 | 信号合成方法和设备 |
DE60326782D1 (de) * | 2002-04-22 | 2009-04-30 | Koninkl Philips Electronics Nv | Dekodiervorrichtung mit Dekorreliereinheit |
US7039204B2 (en) * | 2002-06-24 | 2006-05-02 | Agere Systems Inc. | Equalization for audio mixing |
US7502743B2 (en) * | 2002-09-04 | 2009-03-10 | Microsoft Corporation | Multi-channel audio encoding and decoding with multi-channel transform selection |
US7299190B2 (en) * | 2002-09-04 | 2007-11-20 | Microsoft Corporation | Quantization and inverse quantization for audio |
-
2005
- 2005-06-28 JP JP2006528708A patent/JP4934427B2/ja active Active
- 2005-06-28 WO PCT/JP2005/011842 patent/WO2006003891A1/ja active Application Filing
- 2005-06-28 US US11/629,135 patent/US7756713B2/en active Active
- 2005-06-28 CN CN2005800226670A patent/CN1981326B/zh active Active
- 2005-06-28 CA CA2572805A patent/CA2572805C/en active Active
- 2005-06-28 EP EP05765247.1A patent/EP1768107B1/en active Active
- 2005-06-28 KR KR1020067024727A patent/KR101120911B1/ko active IP Right Grant
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06105824A (ja) * | 1992-09-28 | 1994-04-19 | Toshiba Corp | 磁気共鳴信号の処理装置およびその処理方法 |
JPH09507734A (ja) * | 1994-01-04 | 1997-08-05 | モトローラ・インコーポレイテッド | 広帯域および狭帯域無線通信を同時に行うための方法および装置 |
JPH09102742A (ja) * | 1995-10-05 | 1997-04-15 | Sony Corp | 符号化方法および装置、復号化方法および装置、並びに記録媒体 |
JP2003522439A (ja) * | 1999-06-15 | 2003-07-22 | ヒアリング エンハンスメント カンパニー,リミティド ライアビリティー カンパニー | 音声対残留オーディオ(vra)相互作用式補聴装置および補助設備 |
WO2003090207A1 (en) * | 2002-04-22 | 2003-10-30 | Koninklijke Philips Electronics N.V. | Parametric multi-channel audio representation |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012181556A (ja) * | 2005-09-13 | 2012-09-20 | Koninkl Philips Electronics Nv | オーディオ符号化 |
EP2048658A1 (en) * | 2006-08-04 | 2009-04-15 | Panasonic Corporation | Stereo audio encoding device, stereo audio decoding device, and method thereof |
EP2048658A4 (en) * | 2006-08-04 | 2012-07-11 | Panasonic Corp | STEREOAUDIO CODING DEVICE, STEREOAUDIO DECODING DEVICE AND METHOD THEREFOR |
US9792918B2 (en) | 2006-09-29 | 2017-10-17 | Lg Electronics Inc. | Methods and apparatuses for encoding and decoding object-based audio signals |
US9384742B2 (en) | 2006-09-29 | 2016-07-05 | Lg Electronics Inc. | Methods and apparatuses for encoding and decoding object-based audio signals |
US8762157B2 (en) | 2006-09-29 | 2014-06-24 | Lg Electronics Inc. | Methods and apparatuses for encoding and decoding object-based audio signals |
US8625808B2 (en) | 2006-09-29 | 2014-01-07 | Lg Elecronics Inc. | Methods and apparatuses for encoding and decoding object-based audio signals |
US8504376B2 (en) | 2006-09-29 | 2013-08-06 | Lg Electronics Inc. | Methods and apparatuses for encoding and decoding object-based audio signals |
US8488797B2 (en) | 2006-12-07 | 2013-07-16 | Lg Electronics Inc. | Method and an apparatus for decoding an audio signal |
US8428267B2 (en) | 2006-12-07 | 2013-04-23 | Lg Electronics Inc. | Method and an apparatus for decoding an audio signal |
US8340325B2 (en) | 2006-12-07 | 2012-12-25 | Lg Electronics Inc. | Method and an apparatus for decoding an audio signal |
US8311227B2 (en) | 2006-12-07 | 2012-11-13 | Lg Electronics Inc. | Method and an apparatus for decoding an audio signal |
KR101111520B1 (ko) | 2006-12-07 | 2012-05-24 | 엘지전자 주식회사 | 오디오 처리 방법 및 장치 |
US8374882B2 (en) | 2008-12-11 | 2013-02-12 | Fujitsu Limited | Parametric stereophonic audio decoding for coefficient correction by distortion detection |
JP2013545128A (ja) * | 2010-10-13 | 2013-12-19 | サムスン エレクトロニクス カンパニー リミテッド | 多チャネルオーディオ信号をダウンミックスする方法及び装置 |
WO2014068817A1 (ja) * | 2012-10-31 | 2014-05-08 | パナソニック株式会社 | オーディオ信号符号化装置及びオーディオ信号復号装置 |
JPWO2014068817A1 (ja) * | 2012-10-31 | 2016-09-08 | 株式会社ソシオネクスト | オーディオ信号符号化装置及びオーディオ信号復号装置 |
JP2017078858A (ja) * | 2013-04-05 | 2017-04-27 | ドルビー・インターナショナル・アーベー | 信号をインタリーブするためのオーディオ復号器 |
US11830510B2 (en) | 2013-04-05 | 2023-11-28 | Dolby International Ab | Audio decoder for interleaving signals |
JP2019191596A (ja) * | 2013-04-05 | 2019-10-31 | ドルビー・インターナショナル・アーベー | 信号をインタリーブするためのオーディオ復号器 |
US10438602B2 (en) | 2013-04-05 | 2019-10-08 | Dolby International Ab | Audio decoder for interleaving signals |
US11114107B2 (en) | 2013-04-05 | 2021-09-07 | Dolby International Ab | Audio decoder for interleaving signals |
JP7035154B2 (ja) | 2017-03-31 | 2022-03-14 | 華為技術有限公司 | マルチチャネル信号符号化方法、マルチチャネル信号復号化方法、符号器、及び復号器 |
JP2022084671A (ja) * | 2017-03-31 | 2022-06-07 | 華為技術有限公司 | マルチチャネル信号符号化方法、マルチチャネル信号復号化方法、符号器、及び復号器 |
US11386907B2 (en) | 2017-03-31 | 2022-07-12 | Huawei Technologies Co., Ltd. | Multi-channel signal encoding method, multi-channel signal decoding method, encoder, and decoder |
JP2021047432A (ja) * | 2017-03-31 | 2021-03-25 | 華為技術有限公司Huawei Technologies Co.,Ltd. | マルチチャネル信号符号化方法、マルチチャネル信号復号化方法、符号器、及び復号器 |
US11894001B2 (en) | 2017-03-31 | 2024-02-06 | Huawei Technologies Co., Ltd. | Multi-channel signal encoding method, multi-channel signal decoding method, encoder, and decoder |
JP7436541B2 (ja) | 2017-03-31 | 2024-02-21 | 華為技術有限公司 | マルチチャネル信号符号化方法、コンピュータ可読記憶媒体、コンピュータプログラム、及び符号器 |
US12154578B2 (en) | 2017-03-31 | 2024-11-26 | Huawei Technologies Co., Ltd. | Multi-channel signal encoding method, multi-channel signal decoding method, encoder, and decoder |
JP2021121853A (ja) * | 2017-04-12 | 2021-08-26 | 華為技術有限公司Huawei Technologies Co., Ltd. | マルチチャネル信号符号化方法、マルチチャネル信号復号方法、エンコーダ、およびデコーダ |
JP7106711B2 (ja) | 2017-04-12 | 2022-07-26 | 華為技術有限公司 | マルチチャネル信号符号化方法、マルチチャネル信号復号方法、エンコーダ、およびデコーダ |
JP2022160440A (ja) * | 2017-04-12 | 2022-10-19 | 華為技術有限公司 | マルチチャネル信号符号化方法、マルチチャネル信号復号方法、エンコーダ、およびデコーダ |
JP7379602B2 (ja) | 2017-04-12 | 2023-11-14 | 華為技術有限公司 | マルチチャネル信号符号化方法、マルチチャネル信号復号方法、エンコーダ、およびデコーダ |
US11832087B2 (en) | 2017-04-12 | 2023-11-28 | Huawei Technologies Co., Ltd. | Multi-channel signal encoding method, multi-channel signal decoding method, encoder, and decoder |
Also Published As
Publication number | Publication date |
---|---|
EP1768107A1 (en) | 2007-03-28 |
CN1981326B (zh) | 2011-05-04 |
EP1768107A4 (en) | 2009-10-21 |
JPWO2006003891A1 (ja) | 2008-04-17 |
KR20070030796A (ko) | 2007-03-16 |
CN1981326A (zh) | 2007-06-13 |
US20080071549A1 (en) | 2008-03-20 |
JP4934427B2 (ja) | 2012-05-16 |
KR101120911B1 (ko) | 2012-02-27 |
US7756713B2 (en) | 2010-07-13 |
CA2572805C (en) | 2013-08-13 |
CA2572805A1 (en) | 2006-01-12 |
EP1768107B1 (en) | 2016-03-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4934427B2 (ja) | 音声信号復号化装置及び音声信号符号化装置 | |
KR101967122B1 (ko) | 신호 처리 장치 및 방법, 및 프로그램 | |
US7974713B2 (en) | Temporal and spatial shaping of multi-channel audio signals | |
CN103329197B (zh) | 用于反相声道的改进的立体声参数编码/解码 | |
CN101223821B (zh) | 音频解码器 | |
US8817992B2 (en) | Multichannel audio coder and decoder | |
US8019087B2 (en) | Stereo signal generating apparatus and stereo signal generating method | |
RU2495503C2 (ru) | Устройство кодирования звука, устройство декодирования звука, устройство кодирования и декодирования звука и система проведения телеконференций | |
JP5426680B2 (ja) | 信号処理方法及び装置 | |
WO2011013381A1 (ja) | 符号化装置および復号装置 | |
CN102656628B (zh) | 优化的低吞吐量参数编码/解码 | |
US8352249B2 (en) | Encoding device, decoding device, and method thereof | |
US9177569B2 (en) | Apparatus, medium and method to encode and decode high frequency signal | |
WO2006075563A1 (ja) | オーディオ符号化装置、オーディオ符号化方法およびオーディオ符号化プログラム | |
CN105378832A (zh) | 利用对象特定时间/频率分辨率从混合信号分离音频对象 | |
CN104838442A (zh) | 用于反向兼容多重分辨率空间音频对象编码的编码器、译码器及方法 | |
JP4794448B2 (ja) | オーディオエンコーダ | |
US20120035936A1 (en) | Information reuse in low power scalable hybrid audio encoders |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2006528708 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020067024727 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11629135 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2005765247 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2572805 Country of ref document: CA |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200580022667.0 Country of ref document: CN |
|
WWP | Wipo information: published in national office |
Ref document number: 1020067024727 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 2005765247 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 11629135 Country of ref document: US |