
EP0557940B1 - Speech coding system - Google Patents

Speech coding system

Info

Publication number
EP0557940B1
Authority
EP
European Patent Office
Prior art keywords
speech
subvector
weighting
filter coefficients
adaptive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP93102794A
Other languages
German (de)
French (fr)
Other versions
EP0557940A2 (en)
EP0557940A3 (en)
Inventor
Masahiro Serizawa, c/o NEC Corporation
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp
Publication of EP0557940A2
Publication of EP0557940A3
Application granted
Publication of EP0557940B1
Anticipated expiration
Expired - Lifetime (current legal status)

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, using source filter models or psychoacoustic analysis, using predictive techniques
    • G10L19/08 - Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L19/12 - Determination or coding of the excitation function, the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders


Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Transmission Systems Not Characterized By The Medium Used For Transmission (AREA)

Description

  • The present invention relates to a speech coding system for high quality coding of a speech signal at a low bit rate, e.g., 8 K bps or lower.
  • One example of a conventional speech coding system is CELP (Code Excited LPC) coding, disclosed in the technical paper "CODE-EXCITED LINEAR PREDICTION (CELP): HIGH QUALITY SPEECH AT VERY LOW BIT RATES", IEEE Proc. ICASSP-85, pp. 937-940, 1985, by M. Schroeder and B. Atal. In this system, linear predictive analysis is first performed on the speech (voice) signal in a frame of a predetermined period at the transmission side to obtain the linear predictive coefficient sets. An adaptive codebook stores a plurality of adaptive codevectors produced by cutting out the previously synthesized sound source signal at predetermined timings. From the adaptive codebook, the adaptive codevector having the smallest square distance is searched based upon the perceptually weighted speech vector and the perceptually weighted synthesized adaptive codevectors. The searched synthesized adaptive codevector is subtracted from the weighted speech vector to obtain the residual vector. An excitation codebook stores a plurality of excitation codevectors obtained from, for example, a noise signal in a predetermined frame. The excitation codevector having the smallest square distance is searched based upon the perceptually weighted synthesized excitation codevectors and the residual vector. Optimum gains are calculated for the searched adaptive codevector and excitation codevector. The indexes of the searched adaptive codevector and excitation codevector, the gains, and the linear predictive coefficient set are transmitted. At the receiving side, the speech signal is synthesized from these indexes. A sketch of this search procedure is given below.
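
A minimal sketch, in Python, of the conventional search loop just described: one LPC coefficient set is used over the whole frame, each candidate codevector is passed through the synthesis filter 1/A(z) and a perceptual weighting filter, and the index with the smallest squared distance to the weighted target is kept. The function name, gamma values and gain handling are illustrative assumptions, not taken from the cited paper.

    import numpy as np
    from scipy.signal import lfilter

    def search_codebook(weighted_target, codebook, lpc, gamma1=0.9, gamma2=0.6):
        """Return the index and gain of the codevector whose weighted synthesized,
        gain-scaled version is closest to the already-weighted target."""
        lpc = np.asarray(lpc, dtype=float)
        powers = np.arange(1, len(lpc) + 1)
        a = np.concatenate(([1.0], lpc))                      # A(z) = 1 + sum a(l) z^-l
        num = np.concatenate(([1.0], lpc * gamma2 ** powers)) # W(gamma2 q^-1)
        den = np.concatenate(([1.0], lpc * gamma1 ** powers)) # W(gamma1 q^-1)
        best_j, best_gain, best_err = -1, 0.0, np.inf
        for j, c in enumerate(codebook):
            synth = lfilter([1.0], a, c)                      # synthesis filter 1/A(z)
            xw = lfilter(num, den, synth)                     # perceptual weighting
            gain = np.dot(weighted_target, xw) / (np.dot(xw, xw) + 1e-12)
            err = np.sum((weighted_target - gain * xw) ** 2)  # squared distance
            if err < best_err:
                best_j, best_gain, best_err = j, gain, err
        return best_j, best_gain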
  • A disadvantage of the conventional speech coding system is degradation in speech quality. The reason is that, in the codebook search, the square distance between the weighted vectors is calculated using a single LPC coefficient set over the entire vector. Accordingly, if the vector length is long, changes in the frequency characteristics of the speech signal within the vector cannot be sufficiently approximated.
  • EP-A-0 342 687 describes a coded speech communication system having code books for synthesizing small-amplitude components.
  • EP-A-0 443 548 discloses a speech coder including an LPC analyzer, an adaptive codebook and two excitation codebooks.
  • US-A-4 975 956 relates to a low-bit-rate speech coder using LPC data reduction processing.
  • In view of the above, it is an object of the present invention to overcome the above disadvantage and to provide a speech coding system with a codebook search capable of efficiently quantizing the speech signal. This object is achieved with a speech coding system according to independent claim 1.
  • According to the present invention, there is provided a speech coding system comprising: a first means for splitting an input speech signal into time sections and generating each split signal as a speech vector; a second means for developing an LPC coefficient set by linear prediction analysis for every time section of the speech vector; a third means for weighting the speech vector based on the developed LPC coefficient set; a fourth means for connecting a plurality of the weighted speech vectors and generating a connected speech vector having a predetermined frame length; a fifth means storing a plurality of excitation codevectors each having the frame length; a sixth means for determining, from among the plurality of excitation codevectors, the excitation codevector whose weighted synthesized signal is most similar to the weighted speech vector; a seventh means storing a plurality of adaptive codevectors each having the frame length and obtained by cutting out, at predetermined timing points, a sound source signal produced from the excitation codevectors determined by the sixth means; and an eighth means for determining, from among the plurality of adaptive codevectors, the adaptive codevector whose weighted synthesized signal is most similar to the weighted speech vector.
  • The LPC coefficient set may be developed by linear prediction analysis over a predetermined time period longer than the time section, and the LPC coefficient set for each time section may then be developed by interpolation of the frame LPC coefficient sets for that time period, as sketched below.
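
A minimal sketch of deriving per-section coefficient sets by interpolation, assuming simple linear interpolation between the previous and current frame sets; practical coders often interpolate in a transformed domain (e.g., LSPs), and the function name and weighting rule here are illustrative.

    import numpy as np

    def interpolate_lpc(prev_frame_lpc, curr_frame_lpc, n_subvectors):
        """One coefficient set per subvector, moving from the previous toward the
        current frame set (direct coefficient interpolation shown only for illustration)."""
        prev = np.asarray(prev_frame_lpc, dtype=float)
        curr = np.asarray(curr_frame_lpc, dtype=float)
        sets = []
        for i in range(n_subvectors):
            w = (i + 1) / n_subvectors        # the last subvector uses the current set
            sets.append((1.0 - w) * prev + w * curr)
        return sets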
  • Other objects and features will be clarified from the following description with reference to the attached drawing.
  • FIG. 1 is a block diagram of one embodiment of the speech coding system according to the present invention.
  • In operation of the speech coding system according to the present invention, in order to determine the adaptive codevector and the excitation codevector, a synthesized speech vector is first calculated by a filter for each speech vector in a given time length of the input speech signal, using the LPC coefficient set obtained through linear prediction of the input speech. Then, linear predictive analysis is performed in each of the predetermined sections within the vector (e.g., two sections, 0 to N/2-1 and N/2 to N-1, where N is the vector length, equal to the frame length) to develop the section LPC coefficient sets. The developed LPC coefficient sets are used for perceptual weighting of the speech vector. The squared sum of the weighted synthesized signal is developed in accordance with the following expression (1), and the codevector giving the smallest distance is searched based upon this squared sum:

    $$\sum_{k=0}^{N-1}\left(x^{w}_{k}-g_{ac,j}\,x^{w}_{ac,k}-g_{ec,j}\,x^{w}_{ec,k}\right)^{2}\qquad(1)$$

    Here $x^{w}_{k}$ represents the k-th element of the perceptually weighted speech vector for the speech vector element $x_{k}$ and is given by

    $$x^{w}_{k}=\frac{W_{i}(\gamma_{2}q^{-1})}{W_{i}(\gamma_{1}q^{-1})}\,x_{k},$$

    so that the weighting function is $W_{i}(\gamma_{2}q^{-1})/W_{i}(\gamma_{1}q^{-1})$, with

    $$W_{i}(q^{-1})=1+\sum_{l=1}^{L}w_{i}(l)\,q^{-l}.$$

    $q^{-1}$ is a shift operator representing a time delay, satisfying $x_{k}q^{-i}=x_{k-i}$ and $q^{-i}q^{-j}=q^{-i-j}$, and $i$ denotes the section number. $w_{i}(l)$ is the l-th order LPC coefficient for the i-th section of the speech vector (obtained through linear predictive analysis of the input speech signal in the analysis window including that section), and L is the order of the analysis. $\gamma_{1}$ and $\gamma_{2}$ are coefficients for adjusting the perceptual weighting.
  • $x^{w}_{ac,k}$ is the k-th element of the weighted synthesized adaptive codevector for the adaptive codevector element $C_{ac,k}(j)$ of index j and is given by

    $$x^{w}_{ac,k}=\frac{W_{i}(\gamma_{2}q^{-1})}{W_{i}(\gamma_{1}q^{-1})}\,\frac{1}{H_{i}(q^{-1})}\,C_{ac,k}(j),\qquad H_{i}(q^{-1})=1+\sum_{l=1}^{L}a_{i}(l)\,q^{-l},$$

    where $a_{i}(l)$ is the l-th order LPC coefficient obtained, for example, by quantizing and decoding the coefficients corresponding to the speech vector (by way of, for example, linear predictive analysis of the input speech signal in the analysis window including the split frame).
    $x^{w}_{ec,k}$ represents the k-th element of the weighted synthesized excitation codevector for the excitation codevector element $C_{ec,k}(j)$ of index j and is given by

    $$x^{w}_{ec,k}=\frac{W_{i}(\gamma_{2}q^{-1})}{W_{i}(\gamma_{1}q^{-1})}\,\frac{1}{H_{i}(q^{-1})}\,C_{ec,k}(j),$$

    where $g_{ac,j}$ and $g_{ec,j}$ are the optimum gains when the adaptive codevector and the excitation codevector of index j are searched. Expression (1) is used in the codebook search so as to follow changes in the frequency response of the speech signal within the vector, thereby improving the quality of the coded speech. A sketch of evaluating this expression is given below.
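
The sketch below evaluates expression (1) under the section structure described above: the frame is split into as many sections as there are coefficient sets, each section is passed through its own synthesis filter 1/H_i and weighting filter W_i(γ2 q^-1)/W_i(γ1 q^-1), the sections are concatenated again, and the squared distance is formed with the gains g_ac,j and g_ec,j. The function names, γ values and the resetting of filter memory at section boundaries are illustrative assumptions.

    import numpy as np
    from scipy.signal import lfilter

    def _weight(x, w_coefs, g1, g2):
        """Apply W_i(g2 q^-1) / W_i(g1 q^-1) to one section (state not carried over)."""
        w = np.asarray(w_coefs, dtype=float)
        powers = np.arange(1, len(w) + 1)
        num = np.concatenate(([1.0], w * g2 ** powers))
        den = np.concatenate(([1.0], w * g1 ** powers))
        return lfilter(num, den, x)

    def expression1(x, c_ac, c_ec, g_ac, g_ec, w_sets, a_sets, g1=0.9, g2=0.6):
        """Squared distance of expression (1), with one W_i / H_i pair per section.
        Filter memory is reset at section boundaries for brevity."""
        n = len(w_sets)                                    # number of sections i
        xs = np.array_split(np.asarray(x, dtype=float), n)
        acs = np.array_split(np.asarray(c_ac, dtype=float), n)
        ecs = np.array_split(np.asarray(c_ec, dtype=float), n)
        xw, acw, ecw = [], [], []
        for i in range(n):
            h = np.concatenate(([1.0], np.asarray(a_sets[i], dtype=float)))  # H_i(q^-1)
            xw.append(_weight(xs[i], w_sets[i], g1, g2))
            acw.append(_weight(lfilter([1.0], h, acs[i]), w_sets[i], g1, g2))
            ecw.append(_weight(lfilter([1.0], h, ecs[i]), w_sets[i], g1, g2))
        xw, acw, ecw = np.concatenate(xw), np.concatenate(acw), np.concatenate(ecw)
        return np.sum((xw - g_ac * acw - g_ec * ecw) ** 2)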
  • Now, one embodiment of the speech coding system according to the present invention will be described by reference to FIG. 1.
  • Illustrated in FIG. 1 is a block diagram of one preferred embodiment of the present invention. A speech signal is received at an input terminal 10 and is applied to a frame splitter 100, a perceptual weighting splitter 120 and a synthesis filter splitter 130. The frame splitter 100 splits the speech signal at every frame length (e.g., 5 ms) and supplies the split speech signal to an in-frame splitter 105 as the speech vector. The in-frame splitter 105 further splits the speech vector supplied from the frame splitter 100, for example into halves, and supplies the finely (in-frame) split speech vectors to a weighting filter 110. A sketch of this two-stage splitting is given below.
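
A minimal sketch of the frame splitter 100 and in-frame splitter 105, assuming 8 kHz sampling, 5 ms frames and a split into halves (all illustrative values):

    import numpy as np

    def split_frames(speech, fs=8000, frame_ms=5, n_subvectors=2):
        """Cut the signal into frames and each frame into equal subvectors."""
        frame_len = int(fs * frame_ms / 1000)              # 40 samples at 8 kHz / 5 ms
        n_frames = len(speech) // frame_len
        frames = np.asarray(speech[:n_frames * frame_len]).reshape(n_frames, frame_len)
        # fine (in-frame) split: each frame becomes a list of subvectors
        return [np.array_split(frame, n_subvectors) for frame in frames]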
  • The perceptual weighting splitter 120 splits the speech signal from the input terminal 10 into windows of, for example, 20 ms, and an LPC analyzer 125 develops the LPC coefficient sets to be used for perceptual weighting through linear prediction analysis of each window. An LPC interpolator 127 calculates an interpolated set of the LPC coefficient sets supplied from the LPC analyzer 125 for each split speech vector. The interpolated sets are then sent to the weighting filter 110, a weighting filter 160 and a weighting filter 195. A sketch of the windowed analysis is given below.
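
A sketch of the analysis behind the perceptual weighting path (splitter 120 and LPC analyzer 125), using the autocorrelation method on a 20 ms window; the window shape, prediction order and regularization are assumptions for illustration:

    import numpy as np

    def lpc_analyze(window, order=10):
        """Autocorrelation-method LPC: returns a(1..order) with A(z) = 1 + sum a(l) z^-l."""
        w = np.asarray(window, dtype=float) * np.hamming(len(window))
        r = np.correlate(w, w, mode="full")[len(w) - 1:len(w) + order]   # r[0..order]
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        a = np.linalg.solve(R + 1e-6 * np.eye(order), -r[1:order + 1])
        return a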
  • The synthesis filter splitter 130 splits the speech signal from the input terminal 10 into windows of, for example, 20 ms, and an LPC analyzer 135 develops the LPC coefficient sets to be used for synthesis through linear prediction analysis of each window. An LPC coefficient quantizer 140 quantizes the LPC coefficient set from the LPC analyzer 135, supplies the quantization index to a multiplexer 300, and supplies the decoded LPC coefficient set to an LPC interpolator 142. The LPC interpolator 142 calculates, by a known method, an interpolated set of the LPC coefficient sets received from the LPC analyzer 135 corresponding to each finely split speech vector. The calculated interpolated sets are sent to a synthesis filter 155 and a synthesis filter 190; the role of the quantizer is sketched below.
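
The sketch below only illustrates the data flow around the LPC coefficient quantizer 140: an index goes to the multiplexer 300 while the decoded coefficient set is what the LPC interpolator 142 and the synthesis filters actually use. Uniform scalar quantization is a stand-in assumption; the text does not specify the quantization scheme.

    import numpy as np

    def quantize_lpc(lpc, step=0.05):
        """Uniform scalar quantization as a stand-in for the real LPC quantizer."""
        indices = np.round(np.asarray(lpc, dtype=float) / step).astype(int)  # to multiplexer 300
        decoded = indices * step                                             # to LPC interpolator 142
        return indices, decoded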
  • The weighting filter 110 performs perceptual weighting on the finely split speech vectors received from the in-frame splitter 105, using the interpolated LPC coefficient sets received from the LPC interpolator 127, and sends the perceptually weighted split speech vectors to a connector 115. The connector 115 connects the finely split speech vectors received from the weighting filter 110 and sends the connected vector to a subtractor 175 and a least square error index searcher 170.
  • An adaptive codebook 145 stores the pitch information of the speech signal, such as the past synthesized sound source signal of a predetermined number of frames received from an adder 205 (described hereinafter). An adaptive codevector of the given frame length, cut out at predetermined timings, is sent to an in-frame splitter 150 (see the sketch of candidate cut-outs below). The in-frame splitter 150 splits the adaptive codevector received from the adaptive codebook 145, for example into halves, and sends the finely split adaptive codevector to a synthesis filter 155. The synthesis filter 155 filters the finely split adaptive codevector received from the in-frame splitter 150 using the interpolated LPC coefficient set received from the LPC interpolator 142. The weighting filter 160 performs perceptual weighting of the signal vector synthesized by the synthesis filter 155 in accordance with the interpolated LPC coefficient set received from the LPC interpolator 127. A connector 165 includes a buffer memory and connects the perceptually weighted split adaptive codevectors received from the weighting filter 160.
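
A sketch of how candidate adaptive codevectors can be cut out of the past sound source signal at predetermined timings (lags); the lag range and the repetition rule for lags shorter than the frame are illustrative assumptions:

    import numpy as np

    def adaptive_candidates(past_excitation, frame_len, min_lag=20, max_lag=147):
        """One candidate adaptive codevector per lag, cut from the past sound source."""
        past = np.asarray(past_excitation, dtype=float)
        candidates = []
        for lag in range(min_lag, max_lag + 1):
            seg = past[-lag:]
            # repeat the cut-out segment when the lag is shorter than the frame
            reps = int(np.ceil(frame_len / lag))
            candidates.append(np.tile(seg, reps)[:frame_len])
        return candidates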
  • The least square error index searcher 170 calculates the square distance between the weighted synthesized adaptive codevector received from the connector 165 and the weighted speech vector received from the connector 115 (see the two-stage search sketch below). The weighted synthesized adaptive codevector for which the square distance is minimum is sent to the subtractor 175; the corresponding adaptive codevector received from the adaptive codebook 145 is sent to the adder 205, and its index is sent to the multiplexer 300.
  • The subtractor 175 develops the adaptive codebook residual vector by subtracting the weighted synthesized adaptive codevector received from the least square error index searcher 170 from the weighted speech vector received from the connector 115. The adaptive codebook residual vector is then sent to a least square error index searcher 207.
  • An excitation codebook 180 sends each excitation codevector to an in-frame splitter 185. The in-frame splitter 185 further splits (finely splits) the excitation codevector received from the excitation codebook 180, for example into halves, and sends the finely split excitation codevector to a synthesis filter 190. The synthesis filter 190 filters the split excitation codevector received from the in-frame splitter 185 using the interpolated LPC coefficient set received from the LPC interpolator 142. The weighting filter 195 performs perceptual weighting of the synthesized vector received from the synthesis filter 190 using the interpolated LPC coefficient set received from the LPC interpolator 127. A connector 200 connects the weighted synthesized, finely split excitation codevectors received from the weighting filter 195 and sends the connected vector to the least square error index searcher 207. The least square error index searcher 207 develops the square distance between the adaptive codebook residual vector received from the subtractor 175 and the weighted synthesized excitation codevector received from the connector 200 (as sketched below). When the minimum square distance is found, the excitation codevector received from the excitation codebook 180 is sent to the adder 205 and its index is sent to the multiplexer 300.
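
A compact sketch of the sequential search performed by the searcher 170, the subtractor 175 and the searcher 207, assuming the candidates have already been synthesized, weighted and connected; the closed-form optimal gain and the gain scaling in the subtraction are standard assumptions rather than quoted text:

    import numpy as np

    def two_stage_search(weighted_speech, weighted_adaptive, weighted_excitation):
        """Searcher 170, subtractor 175 and searcher 207 in sequence."""
        def best(target, candidates):
            best_j, best_g, best_err = -1, 0.0, np.inf
            for j, xw in enumerate(candidates):
                g = np.dot(target, xw) / (np.dot(xw, xw) + 1e-12)   # assumed optimal gain
                err = np.sum((target - g * xw) ** 2)                # squared distance
                if err < best_err:
                    best_j, best_g, best_err = j, g, err
            return best_j, best_g
        j_ac, g_ac = best(weighted_speech, weighted_adaptive)        # searcher 170
        residual = weighted_speech - g_ac * weighted_adaptive[j_ac]  # subtractor 175
        j_ec, g_ec = best(residual, weighted_excitation)             # searcher 207
        return (j_ac, g_ac), (j_ec, g_ec)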
  • The adder 205 adds the adaptive codevector received from the least square error index searcher 170 and the excitation codevector received from the least square error index searcher 207, and supplies the result to the adaptive codebook 145 (see the update sketch below).
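
A sketch of the adaptive codebook update driven by the adder 205; the gain scaling of the selected codevectors and the history buffer length are assumptions for illustration:

    import numpy as np

    def update_adaptive_codebook(history, adaptive_vec, g_ac, excitation_vec, g_ec,
                                 max_history=1024):
        """Form the new sound source vector and append it to the past-signal buffer."""
        new_source = g_ac * np.asarray(adaptive_vec) + g_ec * np.asarray(excitation_vec)
        history = np.concatenate([np.asarray(history, dtype=float), new_source])
        return history[-max_history:]       # keep only the most recent samples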
  • The multiplexer 300 combines the output of the LPC coefficient quantizer 140, the index from the least square error index searcher 170 and the index from the least square error index searcher 207, and sends the combined data to an output terminal 305.
  • In the above coding system, the perceptual weighting LPC coefficient sets may be the LPC coefficient sets from the LPC analyzer 135 or the quantized LPC coefficient sets. In that case, the perceptual weighting splitter 120 and the LPC analyzer 125 are unnecessary. The LPC interpolator 127 can also be eliminated if the LPC coefficient sets for perceptual weighting are obtained by performing linear prediction analyses as many times as there are in-frame splits. The number of in-frame splits may be 1. Also, the LPC analyzer may be modified to perform linear prediction analysis of the speech signal over the predetermined window length (e.g., 20 ms) at every period (e.g., 20 ms) equal to a multiple of the frame length.
  • In the foregoing embodiment, the determination of the searched excitation codevector and adaptive codevector corresponds to the determination of the sound source information and the pitch information of the input speech signal.
  • As understood from the above description, the speech coding apparatus according to the present invention performs the codebook search using a weighted synthesized square distance computed over sections split within a frame, thereby providing improved quality as compared with the conventional method.

Claims (3)

  1. A speech coding system comprising:
    a) means (100) for splitting an input speech signal (10) and generating speech vectors;
    b) means (105) for further splitting said speech vectors and generating speech subvectors;
    c) means (110) for weighting said speech subvectors by using weighting filter coefficients;
    d) means (120, 125, 127) for receiving the input speech signal (10) and for calculating said weighting filter coefficients for each of said speech subvectors;
    e) means (130, 135, 140, 142) for receiving the input speech signal (10) and for calculating synthesis filter coefficients for each of said speech subvectors;
    f) adaptive codebook means (145) for storing a plurality of adaptive codevectors each having the same length as said speech vectors, generated on the basis of the sound source signal reproduced in the past;
    g) means (150, 155, 160, 165) for synthesizing and weighting said adaptive codevectors for each time section corresponding to each speech subvector by using said synthesis filter coefficients and said weighting filter coefficients;
    h) means (170) for selecting an adaptive codevector for a speech subvector by comparing the weighted speech subvector with said weighted synthesized adaptive codevectors;
    i) means (180) for storing a plurality of excitation codevectors each having the same length as said speech vectors;
    l) means (185, 190, 195, 200) for synthesizing and weighting said excitation codevectors for each time section corresponding to each speech subvector by using said synthesis filter coefficients and said weighting filter coefficients; and
    m) means (175, 207) for selecting an excitation codevector for a speech subvector by comparing said weighted synthesized excitation codevectors with a function of the weighted speech subvector and the subvector obtained by synthesizing and weighting the selected adaptive codevector.
  2. The speech coding system according to claim 1, wherein the weighting filter coefficients for each speech subvector are calculated by using, as a weighting filter coefficient corresponding to a speech subvector other than the last part of the speech vector, coefficients obtained by interpolating weighting filter coefficients corresponding to the speech subvector in the last part of a speech vector in the past and weighting filter coefficients corresponding to the speech subvector in the last part of the pertinent speech vector.
  3. The speech coding system according to claim 1, wherein LPC coefficients are used as the weighting filter coefficients.
EP93102794A 1992-02-24 1993-02-23 Speech coding system Expired - Lifetime EP0557940B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP35881/92 1992-02-24
JP3588192 1992-02-24
JP03588192A JP3248215B2 (en) 1992-02-24 1992-02-24 Audio coding device

Publications (3)

Publication Number Publication Date
EP0557940A2 (en) 1993-09-01
EP0557940A3 (en) 1994-03-23
EP0557940B1 (en) 2000-09-27 (granted)

Family

ID=12454351

Family Applications (1)

Application Number Title Priority Date Filing Date
EP93102794A Expired - Lifetime EP0557940B1 (en) 1992-02-24 1993-02-23 Speech coding system

Country Status (4)

Country Link
EP (1) EP0557940B1 (en)
JP (1) JP3248215B2 (en)
CA (1) CA2090205C (en)
DE (1) DE69329476T2 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2591430B2 (en) * 1993-06-30 1997-03-19 日本電気株式会社 Vector quantizer
MX9700278A (en) 1995-05-10 1997-05-31 Nintendo Co Ltd Operating device with analog joystick.
EP0797139B1 (en) 1995-10-09 2003-06-18 Nintendo Co., Limited Three-dimensional image processing system
JP3544268B2 (en) 1995-10-09 2004-07-21 任天堂株式会社 Three-dimensional image processing apparatus and image processing method using the same
JP3524247B2 (en) 1995-10-09 2004-05-10 任天堂株式会社 Game machine and game machine system using the same
US6022274A (en) 1995-11-22 2000-02-08 Nintendo Co., Ltd. Video game system using memory module
US6267673B1 (en) 1996-09-20 2001-07-31 Nintendo Co., Ltd. Video game system with state of next world dependent upon manner of entry from previous world via a portal
US6190257B1 (en) 1995-11-22 2001-02-20 Nintendo Co., Ltd. Systems and method for providing security in a video game system
TW419645B (en) * 1996-05-24 2001-01-21 Koninkl Philips Electronics Nv A method for coding Human speech and an apparatus for reproducing human speech so coded
US6241610B1 (en) 1996-09-20 2001-06-05 Nintendo Co., Ltd. Three-dimensional image processing system having dynamically changing character polygon number
US6139434A (en) 1996-09-24 2000-10-31 Nintendo Co., Ltd. Three-dimensional image processing apparatus with enhanced automatic and user point of view control
JP3655438B2 (en) 1997-07-17 2005-06-02 任天堂株式会社 Video game system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0342687B1 (en) * 1988-05-20 1995-04-12 Nec Corporation Coded speech communication system having code books for synthesizing small-amplitude components
US4975956A (en) * 1989-07-26 1990-12-04 Itt Corporation Low-bit-rate speech coder using LPC data reduction processing
DE69133296T2 (en) * 1990-02-22 2004-01-29 Nec Corp speech
JPH05108096A (en) * 1991-10-18 1993-04-30 Sanyo Electric Co Ltd Vector drive type speech encoding device

Also Published As

Publication number Publication date
JPH05232997A (en) 1993-09-10
JP3248215B2 (en) 2002-01-21
CA2090205C (en) 1998-08-04
EP0557940A2 (en) 1993-09-01
CA2090205A1 (en) 1993-08-25
DE69329476D1 (en) 2000-11-02
DE69329476T2 (en) 2001-02-08
EP0557940A3 (en) 1994-03-23

Similar Documents

Publication Publication Date Title
US5142584A (en) Speech coding/decoding method having an excitation signal
EP1221694B1 (en) Voice encoder/decoder
US5140638A (en) Speech coding system and a method of encoding speech
US6023672A (en) Speech coder
EP0957472B1 (en) Speech coding apparatus and speech decoding apparatus
CA2061830C (en) Speech coding system
EP0557940B1 (en) Speech coding system
EP0778561B1 (en) Speech coding device
US6009388A (en) High quality speech code and coding method
EP1005022A1 (en) Speech encoding method and speech encoding system
US5873060A (en) Signal coder for wide-band signals
CA2232446C (en) Coding and decoding system for speech and musical sound
US5797119A (en) Comb filter speech coding with preselected excitation code vectors
US4945567A (en) Method and apparatus for speech-band signal coding
JPH0944195A (en) Voice encoding device
US5692101A (en) Speech coding method and apparatus using mean squared error modifier for selected speech coder parameters using VSELP techniques
US5884252A (en) Method of and apparatus for coding speech signal
US6751585B2 (en) Speech coder for high quality at low bit rates
EP1154407A2 (en) Position information encoding in a multipulse speech coder
JP3319396B2 (en) Speech encoder and speech encoder / decoder
JP3299099B2 (en) Audio coding device
JPH08185199A (en) Voice coding device
JP3192051B2 (en) Audio coding device
EP0780832A2 (en) Speech coding device for estimating an error of power envelopes of synthetic and input speech signals
JP3099836B2 (en) Excitation period encoding method for speech

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): DE FR GB

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): DE FR GB

17P Request for examination filed

Effective date: 19940208

17Q First examination report despatched

Effective date: 19961219

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

RIC1 Information provided on ipc code assigned before grant

Free format text: 7G 10L 19/12 A

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB

REF Corresponds to:

Ref document number: 69329476

Country of ref document: DE

Date of ref document: 20001102

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed
REG Reference to a national code

Ref country code: GB

Ref legal event code: IF02

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20070215

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20070221

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20070208

Year of fee payment: 15

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20080223

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20081031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080902

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080229

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080223