
US20030144840A1 - Method and apparatus for speech detection using time-frequency variance - Google Patents


Info

Publication number
US20030144840A1
US20030144840A1
Authority
US
United States
Prior art keywords
speech
power
band
variance
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US10/060,511
Other versions
US7299173B2
Inventor
Changxue Ma
Mark Randolph
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google Technology Holdings LLC
Original Assignee
Motorola Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Motorola Inc filed Critical Motorola Inc
Priority to US10/060,511 priority Critical patent/US7299173B2/en
Assigned to MOTOROLA, INC. reassignment MOTOROLA, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MA, CHANGXUE, RANDOLPH, MARK
Priority to PCT/US2002/040533 priority patent/WO2003065352A1/en
Publication of US20030144840A1 publication Critical patent/US20030144840A1/en
Application granted granted Critical
Publication of US7299173B2 publication Critical patent/US7299173B2/en
Assigned to Motorola Mobility, Inc reassignment Motorola Mobility, Inc ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MOTOROLA, INC
Assigned to MOTOROLA MOBILITY LLC reassignment MOTOROLA MOBILITY LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: MOTOROLA MOBILITY, INC.
Assigned to Google Technology Holdings LLC reassignment Google Technology Holdings LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MOTOROLA MOBILITY LLC
Adjusted expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78 - Detection of presence or absence of voice signals
    • G10L25/03 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/18 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters, the extracted parameters being spectral information of each sub-band

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Speech presence is detected by first bandpass filtering (141, 143, 145) the speech to split it into a bank of sub-bands. A matrix of shift registers (150) stores each sub-band of speech. A power determining circuit (259) then determines individual power measurements of the speech stored in each shift register element. A variance combining circuit (160) combines the individual power measurements to provide a variance for the individual shift registers. A comparator circuit (170) finally compares the variance with at least one threshold to indicate whether speech is detected.

Description

    BACKGROUND OF THE INVENTION
  • 1. Technical Field [0001]
  • The present invention relates to speech detection and, more particularly, relates to improved approaches to efficiently detect speech presence in a noisy environment by way of frequency and temporal considerations. [0002]
  • 2. Description of the Related Art [0003]
  • In some applications, automatic speech recognition needs to be activated by uttering a particular word sequence, such as a keyword. For example, if a desktop personal computer has a speech recognizer for dictation or command control, it is desirable for a user to activate the recognizer in the middle of conversations in his or her office by uttering a keyword. This process of recognizing the keyword from a continuous speech waveform is called keyword scanning. It would require the recognizer to constantly recognize the incoming speech and spot the keywords. Nevertheless, the recognizer cannot be used to constantly monitor the incoming speech because doing so takes huge computational resources. Other techniques that demand far less computation and memory have to be utilized to reduce the burden on the speech recognizer. It is known that speech detection techniques are ways of eliminating silence segments from speech utterances so that the speech recognizer can be sped up and does not waste time on those silences or even misrecognize silence as speech. Speech detection techniques are often based on the speech waveform and utilize features such as short-time energy and zero crossings. The same techniques can be used to hypothesize keywords if other features such as pitch, duration and voicing are used in conjunction with word end-pointing techniques. Although the keyword hypotheses will be overgenerated, this still reduces a large proportion of the computation since the recognizer only processes these hypotheses. [0004]
  • Most speech recognition applications today face the challenging task of segmenting speech based on voiced, unvoiced and silence detection. A conventional approach is to detect short-term energy and zero crossings of a speech signal. These approaches are not reliable for noisy telephone speech signals due, in part, to the greater noise in the background environment of most telephone conversations. For example, stationary noise such as motor or wind noise and non-stationary noise such as doors opening and closing or respiratory exhalation are present in telephone speech. [0005]
  • Accurate speech presence detection also conserves power and processing time for portable electronic devices such as cellular telephones. Without a reliable speech detection approach, a speech recognition algorithm must examine all incoming utterances to determine whether they are in fact speech. This places a burden on the computational complexity of processors and is a resource drain on portable electronic devices. A speech detection approach having computational efficiency as well as accuracy is needed. [0006]
  • SUMMARY OF THE INVENTION
  • The inventors of the present invention have discovered that a high variance is associated with voiced speech such as vowels, while a low variance is associated with silence and wide-band noise. Speech presence can be efficiently detected in a noisy environment by way of frequency and temporal considerations using this variance. [0007]
  • Speech presence is detected by first bandpass filtering the speech to split it into a bank of sub-bands. Second, a matrix of shift registers stores each sub-band of speech. A power determining circuit then determines individual power measurements of the speech stored in each shift register element. A combining circuit combines the individual power measurements to provide a variance for the individual shift registers. A comparator circuit finally compares the variance with at least one threshold to indicate whether speech is detected. The present invention can be implemented in software on a microprocessor or digital signal processor, or in combination with discrete components. [0008]
  • The details of the preferred embodiments of the invention will be readily understood from the following detailed description when read in conjunction with the accompanying drawings wherein: [0009]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a schematic block diagram of a time-frequency matrix and variance circuit for speech detection according to the present invention; [0010]
  • FIG. 2 illustrates a detailed schematic block diagram of one matrix element of FIG. 1 for determining power measurements used in the speech detection according to the present invention; and [0011]
  • FIG. 3 illustrates a flow chart diagram of the time-frequency matrix method for detecting speech according to the present invention. [0012]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • FIG. 1 illustrates a schematic block diagram of the time-frequency matrix and variance circuit for speech detection according to the present invention. A microphone 110 gathers speech, often in a noisy environment. An amplifier and analog-to-digital converter 120 amplifies and conditions the electrical speech signal received by the microphone 110 and converts it to digital speech sampled in time. In the preferred embodiment, the digital speech is sampled at an 8 kHz sampling frequency and stored in frames preferably having a 10 millisecond duration. A preemphasis circuit 130 operates on the digital speech to equalize its power spectrum and make its frequency spectrum flatter. A digital signal processing preemphasis of 1 − 0.9z⁻¹ is preferred to equalize the input signal and derive a preemphasized output signal. [0013]
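  • As a rough illustration of the front end just described, the following Python sketch applies the preferred 1 − 0.9z⁻¹ preemphasis and slices the 8 kHz signal into 10 ms (80-sample) frames. The function and constant names are illustrative only, and non-overlapping frames are assumed since the patent does not state an overlap.

```python
import numpy as np

FS = 8000               # 8 kHz sampling rate (preferred embodiment)
FRAME_LEN = FS // 100   # 10 ms frames -> 80 samples per frame

def preemphasize(speech, coeff=0.9):
    """Apply the preferred preemphasis filter H(z) = 1 - 0.9 z^-1."""
    speech = np.asarray(speech, dtype=float)
    return np.append(speech[0], speech[1:] - coeff * speech[:-1])

def frame(signal, frame_len=FRAME_LEN):
    """Split a signal into consecutive, non-overlapping 10 ms frames."""
    n_frames = len(signal) // frame_len
    return signal[:n_frames * frame_len].reshape(n_frames, frame_len)
```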
  • Low band bandpass filter 141, mid band bandpass filter 143 and high band bandpass filter 145 split the preemphasized digital speech signal into a bank of preferably three sub-bands. Although a bank of three sub-bands is preferred, two or more sub-bands will work depending on the level of processing power and the degree of detection accuracy needed for a noisy environment. It is preferred that the bandpass filters 141, 143 and 145 divide the speech signal into roughly equal sub-bands between 100 Hz and 3,600 Hz as follows. The low band bandpass filter 141 preferably has a band between 100 Hz and 1267 Hz, the mid band bandpass filter 143 preferably has a passband between 1267 Hz and 2433 Hz, and the high band bandpass filter 145 preferably has a passband between 2433 Hz and 3600 Hz. Different bandwidths can be used for each sub-band. [0014]
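  • A minimal filter-bank sketch for the three sub-bands listed above follows, building on the constants in the previous snippet. The patent does not specify the filter design, so 4th-order Butterworth bandpass filters from SciPy are assumed here purely for illustration.

```python
from scipy.signal import butter, lfilter

# Preferred sub-band edges in Hz (low, mid and high band bandpass filters).
BAND_EDGES = [(100, 1267), (1267, 2433), (2433, 3600)]

def make_filterbank(fs=FS, order=4):
    """Design one bandpass filter (b, a coefficients) per sub-band."""
    return [butter(order, (lo, hi), btype="bandpass", fs=fs)
            for lo, hi in BAND_EDGES]

def split_subbands(speech, filterbank):
    """Return an array with one filtered signal per sub-band (rows = sub-bands)."""
    return np.stack([lfilter(b, a, speech) for b, a in filterbank])
```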
  • A matrix of shift registers 150 receives the three sub-bands from the bandpass filters 141, 143 and 145. The shift registers 150 store each of the sub-bands, and the contents are shifted to the next register location for each frame. In the preferred embodiment a total of three frames are stored in the shift registers, thus creating a three-by-three matrix Yij consisting of matrix elements Y11, Y12, Y13, Y21, Y22, Y23, Y31, Y32 and Y33. This matrix stores the speech information by way of both frequency and temporal considerations. Each of the three-by-three matrix elements contains sub-registers 250 for storing multiple samples k within a frame. For each of the register memories of the shift registers 150, a power measurement Xij is derived from the contents of the sub-registers. The calculation of the power measurement Xij for each sub-band j over a frame i, within the preferred 10 ms frame duration, is performed by

    $X_{ij} = \sum_{k} S_{ijk}^{2} \qquad (1)$

    [0015]
  • wherein i is the frame index; [0016]
  • wherein j is a frequency sub-band index; [0017]
  • wherein k is the sample index within a frame; and [0018]
  • wherein Sijk is the speech sample for a given frame index i, a given frequency sub-band j and a given sample index k. [0019]
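  • Continuing the sketch, the power measurements of equation (1) can be collected into the three-by-three time-frequency matrix as follows; the array layout is an assumption about how the shift-register contents are arranged, not something specified by the patent.

```python
N_FRAMES = 3   # i index: three consecutive frames are held in the matrix
N_BANDS = 3    # j index: three sub-bands in the preferred embodiment

def power_matrix(subband_frames):
    """Equation (1): X[i, j] = sum over k of S[i, j, k] squared.

    `subband_frames` is assumed to have shape (N_FRAMES, N_BANDS, FRAME_LEN),
    i.e. the samples S_ijk held in the shift-register matrix of FIG. 1.
    """
    return np.sum(np.asarray(subband_frames) ** 2, axis=-1)  # (N_FRAMES, N_BANDS)
```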
  • The calculations of the power measurements Xij are preferably performed within each of the matrix elements Yij of the shift register 150. The power measurement calculation sums the squares of the speech samples for a particular sub-band over time. The preferred calculation of the power measurement for a sub-band, across a number of samples in the shift register elements, will be described in more detail with reference to FIG. 2. Alternatively, a variance combining circuit 160 can perform the calculations of the power measurements. [0020]
  • The inventors of the present invention have discovered that there is a high variance associated with voiced speech such as vowels and a low variance associated with silence and wide-band noise. The variance is a mathematical relationship known in digital speech processing, defined in elementary digital signal processing textbooks such as Digital Communications, equations 1.1.65 and 1.1.66, by Proakis, page 17, published in 1989. The present invention applies a variance to a time-frequency power measurement to detect speech presence. [0021]
  • A variance combining circuit 160 calculates the variance of the plurality of power measurements over the sub-bands and frames. The variance VAR of the n power measurements Xij, taken over each sub-band j and each frame index i, is calculated by

    $\mathrm{VAR} = \frac{\sum_{i,j} X_{ij}^{2}}{n} - \left(\frac{\sum_{i,j} X_{ij}}{n}\right)^{2} \qquad (2)$

    [0022]
  • wherein i is the frame index; [0023]
  • wherein j is a frequency sub-band index; [0024]
  • wherein Xij is the power for a given time sample index i and a given frequency sub-band j. [0025]
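  • Equation (2) is the ordinary mean-of-squares minus squared-mean form of the variance, taken over all nine power measurements. A direct sketch, reusing the matrix from the previous snippet:

```python
def time_frequency_variance(X):
    """Equation (2): VAR = mean(X**2) - mean(X)**2 over all entries X[i, j]."""
    X = np.asarray(X, dtype=float)
    n = X.size
    return np.sum(X ** 2) / n - (np.sum(X) / n) ** 2
```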
  • A comparator 170 compares the variance VAR with a threshold to determine whether or not the presence of speech is detected. When the variance is above the threshold, the presence of speech is detected and a speech detection indication signal 180 is output. The threshold is preferably a fixed level; however, a variable threshold will, under certain conditions, yield more favorable results. A variable threshold can be determined using an average over the past history of non-speech frames. Further, multiple thresholds can be implemented, one for clearly speech and one for clearly non-speech, with a decision made upon a transition over either of these thresholds. [0026]
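  • The comparison step could be sketched as below. The patent only states that a variable threshold may be derived from an average over past non-speech frames; the margin factor and history length here are illustrative guesses, not values from the patent.

```python
def compare_with_threshold(var, noise_history, margin=4.0, history_len=50):
    """Compare VAR against a threshold adapted from recent non-speech frames.

    Returns (speech_detected, updated noise_history).
    """
    baseline = np.mean(noise_history) if noise_history else 0.0
    threshold = margin * baseline if baseline > 0.0 else 1e-6
    speech_detected = bool(var > threshold)
    if not speech_detected:
        # Only non-speech frames contribute to the adaptive noise baseline.
        noise_history = (list(noise_history) + [var])[-history_len:]
    return speech_detected, noise_history
```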
  • The presence of speech indicated by the speech detection indication signal 180 can be used to gate a speech recognition unit on and off. Detecting the presence of speech is useful for gating a speech recognition unit on and off so that the speech recognition unit does not need to operate continuously. This saves processing time that can be used for other purposes and/or conserves power, which reduces battery consumption in a portable electronic device. When a speech recognition circuit is present in a portable electronic device such as a cellular telephone, battery savings are achieved by freeing up the processor for other functions when speech presence is accurately determined. Also, the speech presence detection circuit does not require full activation of the recognition code, so it is more efficient. A reduction in misrecognition is also achieved with better speech presence accuracy. The speech detection indications are also useful for other devices such as speaker phones. [0027]
  • FIG. 2 illustrates a detailed schematic block diagram of the preferred construction of a plurality of sub-registers 250 and a power calculation circuit 259 for determining power measurements used in the speech detection according to the present invention. The preferred calculation of the power measurement for a sub-band, across a number of samples in one matrix element, is illustrated. The plurality of sub-registers 250 and the power calculation circuit 259 are within one of the nine three-by-three matrix elements Yij illustrated in FIG. 1. A plurality 250 of sub-register elements 251, 252, 253 through 255 receives the filtered sub-band speech from a bandpass filter of FIG. 1. Each sub-register element contains a speech sample Sijk for a given time and frequency sub-band. Sub-register element 251 corresponds to a first sample index k=1 within a frame for a given frame i and sub-band j. Sub-register element 252 corresponds to a second sample index and sub-register element 253 corresponds to a third sample index. A total of up to n sample indexes k are possible. [0028]
  • A power calculation circuit 259 calculates the average power among the sub-register elements for the given frame i and sub-band j. The average power Xij is calculated using the above equation (1). Each power calculation circuit 259 corresponds to one of the shift register elements in the matrix of FIG. 1. The output of the power calculation circuit 259 connects to the variance combining circuit 160 of FIG. 1. [0029]
  • FIG. 3 illustrates a flow chart diagram of the time-frequency matrix method for detecting speech according to the present invention. In step 310, speech is received, often in a noisy environment. In step 320, the received speech is preemphasized to improve recognition accuracy by equalizing the power spectrum of the speech signal to flatten its frequency spectrum. In step 330, the speech is bandpass filtered into sub-bands. A power calculation is made in step 340 for the various samples over the various sub-bands. A power calculation is made in step 342 over the samples for the various sub-bands after delaying one frame in step 341. A power calculation is made in step 344 over the samples for the various sub-bands after delaying two frames in step 343. In step 350, a variance is calculated using the power calculations derived above over frequency and over time. This variance is compared in step 360 with at least one threshold 370 to indicate that speech presence is detected at output 380 when the variance is above the threshold. [0030]
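  • Tying the helpers above together gives an end-to-end sketch of the flow of FIG. 3 (steps 310 through 380). It assumes at least three 10 ms frames of input and is only a rough software model of the hardware pipeline described in the figures.

```python
def speech_present(speech, filterbank, noise_history):
    """Preemphasize (step 320), bandpass filter (330), compute per-frame
    sub-band power for the current and two delayed frames (340-344), form
    the variance (350) and compare it with the threshold (360-380)."""
    emphasized = preemphasize(speech)
    subbands = split_subbands(emphasized, filterbank)              # (bands, samples)
    frames = np.stack([frame(band) for band in subbands], axis=1)  # (frames, bands, 80)
    X = power_matrix(frames[-N_FRAMES:])                           # last three frames
    var = time_frequency_variance(X)
    return compare_with_threshold(var, noise_history)

# Example usage on one second of synthetic input:
# fb = make_filterbank()
# detected, history = speech_present(np.random.randn(FS), fb, noise_history=[])
```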
  • The signal processing techniques of the present invention disclosed herein with reference to the accompanying drawings are preferably implemented on one or more digital signal processors (DSPs) or other microprocessors. Nevertheless, such techniques could instead be implemented wholly or partially as discrete components. Further, it is appreciated by those of skill in the art that certain well-known digital processing techniques are mathematically equivalent to one another and can be represented in different ways depending on the choice of implementation. For example, absolute values can be substituted for the squared terms in the variance calculation and/or power calculation without affecting the results. [0031]
  • Although the invention has been described and illustrated in the above description and drawings, it is understood that this description is by example only, and that numerous changes and modifications can be made by those skilled in the art without departing from the true spirit and scope of the invention. Although the examples in the drawings depict only example constructions and embodiments, alternate embodiments are available given the teachings of the present patent disclosure. [0032]

Claims (10)

What is claimed is:
1. A speech presence detection apparatus, comprising:
a plurality of bandpass filters for splitting speech into a bank of sub-bands;
a plurality of shift registers each connected to and associated with one of the bandpass filters for storing the speech of a corresponding sub-band in register elements;
a power determining circuit for determining individual power measurements of the speech stored in each register element;
a variance combining circuit for combining the individual power measurements to provide a variance for the individual registers; and
a comparator circuit for comparing the variance with a threshold to indicate whether speech is detected.
2. A method of detecting the presence of speech, comprising the steps of:
(a) calculating a plurality of power samples of speech, each power sample corresponding to a frequency sub-band and time frame of the speech; and
(b) calculating a variance of the plurality of power samples; and
(c) comparing the variance with at least one threshold to indicate whether speech is detected.
3. A method according to claim 2, wherein the calculation in step (a) of the plurality of power samples of the speech over time and frequency comprises calculating a power corresponding to different audible bands and different sampling periods.
4. A method according to claim 2, wherein the calculation in step (a) of the plurality of power samples of the speech over time and frequency comprises the substeps of (a1) bandpass filtering the speech into banks of sub-bands; (a2) storing the speech of a corresponding sub-band; and (a3) calculating a power of the sub-band over a frame.
5. A method according to claim 2, wherein step (a) of calculating a plurality of power samples of speech comprises
$X_{ij} = \sum_{k} S_{ijk}^{2}$
wherein i is the frame index;
wherein j is a frequency sub-band index;
wherein k is the sample index within a frame; and
wherein Sijk is the speech sample for a given frame index i, a given frequency sub-band j and a given sample index k.
6. A method according to claim 2, wherein step (b) of calculating a variance of the plurality of power measurements comprises
$\mathrm{VAR} = \frac{\sum_{i,j} X_{ij}^{2}}{n} - \left(\frac{\sum_{i,j} X_{ij}}{n}\right)^{2}$
wherein i is a frame index;
wherein j is a frequency sub-band index;
wherein Xij is the power measurement for a given time sample index i and a given frequency sub-band j.
7. A method according to claim 6, wherein the step (a) of calculating each power measurement comprises
$X_{ij} = \sum_{k} S_{ijk}^{2}$
wherein i is the frame index;
wherein j is a frequency sub-band index;
wherein k is a sample index within a frame; and
wherein Sijk is the speech sample for a given frame index i, a given frequency sub-band j and a given sample index k.
8. A method according to claim 2, wherein the calculation in step (c) of comparing the variance with at least one threshold indicates that speech is detected when the variance is above a threshold.
9. An apparatus for detecting the presence of speech, comprising:
means for calculating a plurality of power samples of speech, each power sample corresponding to a frequency sub-band and time frame of the speech;
means for calculating a variance of the plurality of power samples; and
means for comparing the variance with at least one threshold to indicate whether speech is detected.
10. An apparatus according to claim 9, wherein the means for calculating a variance of the plurality of power samples comprises
$\mathrm{VAR} = \frac{\sum_{i,j} X_{ij}^{2}}{n} - \left(\frac{\sum_{i,j} X_{ij}}{n}\right)^{2}$
wherein i is a frame index;
wherein j is a frequency sub-band index;
wherein Xij is the power for a given time sample index i and a given frequency sub-band j.
US10/060,511 2002-01-30 2002-01-30 Method and apparatus for speech detection using time-frequency variance Expired - Lifetime US7299173B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/060,511 US7299173B2 (en) 2002-01-30 2002-01-30 Method and apparatus for speech detection using time-frequency variance
PCT/US2002/040533 WO2003065352A1 (en) 2002-01-30 2002-12-18 Method and apparatus for speech detection using time-frequency variance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/060,511 US7299173B2 (en) 2002-01-30 2002-01-30 Method and apparatus for speech detection using time-frequency variance

Publications (2)

Publication Number Publication Date
US20030144840A1 2003-07-31
US7299173B2 2007-11-20

Family

ID=27610002

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/060,511 Expired - Lifetime US7299173B2 (en) 2002-01-30 2002-01-30 Method and apparatus for speech detection using time-frequency variance

Country Status (2)

Country Link
US (1) US7299173B2 (en)
WO (1) WO2003065352A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030231719A1 (en) * 2002-06-18 2003-12-18 Wreschner Kenneth Solomon System and method for adaptive matched filter signal parameter measurement
US20050119881A1 (en) * 2003-12-02 2005-06-02 Seidman James L. Method for automatic gain control of encoded digital audio streams
US20060080089A1 (en) * 2004-10-08 2006-04-13 Matthias Vierthaler Circuit arrangement and method for audio signals containing speech
EP1903833A1 (en) * 2006-09-21 2008-03-26 Phonic Ear Incorporated Feedback cancellation in a sound system
US20080085013A1 (en) * 2006-09-21 2008-04-10 Phonic Ear Inc. Feedback cancellation in a sound system
US20080107277A1 (en) * 2006-10-12 2008-05-08 Phonic Ear Inc. Classroom sound amplification system
US20080170712A1 (en) * 2007-01-16 2008-07-17 Phonic Ear Inc. Sound amplification system
FR2997250A1 (en) * 2012-10-23 2014-04-25 France Telecom DETECTING A PREDETERMINED FREQUENCY BAND IN AUDIO CODE CONTENT BY SUB-BANDS ACCORDING TO PULSE MODULATION TYPE CODING
US20150025897A1 (en) * 2010-04-14 2015-01-22 Huawei Technologies Co., Ltd. System and Method for Audio Coding and Decoding
US9978392B2 (en) * 2016-09-09 2018-05-22 Tata Consultancy Services Limited Noisy signal identification from non-stationary audio signals
EP3364413A4 (en) * 2015-10-13 2019-06-26 Alibaba Group Holding Limited Method of determining noise signal, and method and device for audio noise removal
CN113362813A (en) * 2021-06-30 2021-09-07 北京搜狗科技发展有限公司 Voice recognition method and device and electronic equipment

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8457771B2 (en) * 2009-12-10 2013-06-04 At&T Intellectual Property I, L.P. Automated detection and filtering of audio advertisements

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4222115A (en) * 1978-03-13 1980-09-09 Purdue Research Foundation Spread spectrum apparatus for cellular mobile communication systems
US4461024A (en) * 1980-12-09 1984-07-17 The Secretary Of State For Industry In Her Britannic Majesty's Government Of The United Kingdom Of Great Britain And Northern Ireland Input device for computer speech recognition system
US4827519A (en) * 1985-09-19 1989-05-02 Ricoh Company, Ltd. Voice recognition system using voice power patterns
US5097510A (en) * 1989-11-07 1992-03-17 Gs Systems, Inc. Artificial intelligence pattern-recognition-based noise reduction system for speech processing
US5617508A (en) * 1992-10-05 1997-04-01 Panasonic Technologies Inc. Speech detection device for the detection of speech end points based on variance of frequency band limited energy
US5659622A (en) * 1995-11-13 1997-08-19 Motorola, Inc. Method and apparatus for suppressing noise in a communication system
US5692104A (en) * 1992-12-31 1997-11-25 Apple Computer, Inc. Method and apparatus for detecting end points of speech activity
US5732392A (en) * 1995-09-25 1998-03-24 Nippon Telegraph And Telephone Corporation Method for speech detection in a high-noise environment
US5826230A (en) * 1994-07-18 1998-10-20 Matsushita Electric Industrial Co., Ltd. Speech detection device
US5963901A (en) * 1995-12-12 1999-10-05 Nokia Mobile Phones Ltd. Method and device for voice activity detection and a communication device
US5991718A (en) * 1998-02-27 1999-11-23 At&T Corp. System and method for noise threshold adaptation for voice activity detection in nonstationary noise environments
US6278972B1 (en) * 1999-01-04 2001-08-21 Qualcomm Incorporated System and method for segmentation and recognition of speech signals
US6397050B1 (en) * 1999-04-12 2002-05-28 Rockwell Collins, Inc. Multiband squelch method and apparatus
US6591234B1 (en) * 1999-01-07 2003-07-08 Tellabs Operations, Inc. Method and apparatus for adaptively suppressing noise
US6711536B2 (en) * 1998-10-20 2004-03-23 Canon Kabushiki Kaisha Speech processing apparatus and method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4860360A (en) * 1987-04-06 1989-08-22 Gte Laboratories Incorporated Method of evaluating speech
US5323337A (en) * 1992-08-04 1994-06-21 Loral Aerospace Corp. Signal detector employing mean energy and variance of energy content comparison for noise detection
US5579431A (en) 1992-10-05 1996-11-26 Panasonic Technologies, Inc. Speech detection in presence of noise by determining variance over time of frequency band limited energy
US6480823B1 (en) 1998-03-24 2002-11-12 Matsushita Electric Industrial Co., Ltd. Speech detection for noisy conditions
US6349278B1 (en) 1999-08-04 2002-02-19 Ericsson Inc. Soft decision signal estimation

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4222115A (en) * 1978-03-13 1980-09-09 Purdue Research Foundation Spread spectrum apparatus for cellular mobile communication systems
US4461024A (en) * 1980-12-09 1984-07-17 The Secretary Of State For Industry In Her Britannic Majesty's Government Of The United Kingdom Of Great Britain And Northern Ireland Input device for computer speech recognition system
US4827519A (en) * 1985-09-19 1989-05-02 Ricoh Company, Ltd. Voice recognition system using voice power patterns
US5097510A (en) * 1989-11-07 1992-03-17 Gs Systems, Inc. Artificial intelligence pattern-recognition-based noise reduction system for speech processing
US5617508A (en) * 1992-10-05 1997-04-01 Panasonic Technologies Inc. Speech detection device for the detection of speech end points based on variance of frequency band limited energy
US5692104A (en) * 1992-12-31 1997-11-25 Apple Computer, Inc. Method and apparatus for detecting end points of speech activity
US5826230A (en) * 1994-07-18 1998-10-20 Matsushita Electric Industrial Co., Ltd. Speech detection device
US5732392A (en) * 1995-09-25 1998-03-24 Nippon Telegraph And Telephone Corporation Method for speech detection in a high-noise environment
US5659622A (en) * 1995-11-13 1997-08-19 Motorola, Inc. Method and apparatus for suppressing noise in a communication system
US5963901A (en) * 1995-12-12 1999-10-05 Nokia Mobile Phones Ltd. Method and device for voice activity detection and a communication device
US5991718A (en) * 1998-02-27 1999-11-23 At&T Corp. System and method for noise threshold adaptation for voice activity detection in nonstationary noise environments
US6711536B2 (en) * 1998-10-20 2004-03-23 Canon Kabushiki Kaisha Speech processing apparatus and method
US6278972B1 (en) * 1999-01-04 2001-08-21 Qualcomm Incorporated System and method for segmentation and recognition of speech signals
US6591234B1 (en) * 1999-01-07 2003-07-08 Tellabs Operations, Inc. Method and apparatus for adaptively suppressing noise
US6397050B1 (en) * 1999-04-12 2002-05-28 Rockwell Collins, Inc. Multiband squelch method and apparatus

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7302017B2 (en) * 2002-06-18 2007-11-27 General Dynamics C4 Systems, Inc. System and method for adaptive matched filter signal parameter measurement
US20030231719A1 (en) * 2002-06-18 2003-12-18 Wreschner Kenneth Solomon System and method for adaptive matched filter signal parameter measurement
US20050119881A1 (en) * 2003-12-02 2005-06-02 Seidman James L. Method for automatic gain control of encoded digital audio streams
US8005672B2 (en) * 2004-10-08 2011-08-23 Trident Microsystems (Far East) Ltd. Circuit arrangement and method for detecting and improving a speech component in an audio signal
US20060080089A1 (en) * 2004-10-08 2006-04-13 Matthias Vierthaler Circuit arrangement and method for audio signals containing speech
EP1903833A1 (en) * 2006-09-21 2008-03-26 Phonic Ear Incorporated Feedback cancellation in a sound system
US20080085013A1 (en) * 2006-09-21 2008-04-10 Phonic Ear Inc. Feedback cancellation in a sound system
US20080107277A1 (en) * 2006-10-12 2008-05-08 Phonic Ear Inc. Classroom sound amplification system
US20080170712A1 (en) * 2007-01-16 2008-07-17 Phonic Ear Inc. Sound amplification system
US20150025897A1 (en) * 2010-04-14 2015-01-22 Huawei Technologies Co., Ltd. System and Method for Audio Coding and Decoding
US9646616B2 (en) * 2010-04-14 2017-05-09 Huawei Technologies Co., Ltd. System and method for audio coding and decoding
FR2997250A1 (en) * 2012-10-23 2014-04-25 France Telecom DETECTING A PREDETERMINED FREQUENCY BAND IN AUDIO CODE CONTENT BY SUB-BANDS ACCORDING TO PULSE MODULATION TYPE CODING
WO2014064379A1 (en) * 2012-10-23 2014-05-01 Orange Detection of a predefined frequency band in a piece of audio content encoded by subbands according to pulse code modulation encoding
EP3364413A4 (en) * 2015-10-13 2019-06-26 Alibaba Group Holding Limited Method of determining noise signal, and method and device for audio noise removal
US10796713B2 (en) 2015-10-13 2020-10-06 Alibaba Group Holding Limited Identification of noise signal for voice denoising device
US9978392B2 (en) * 2016-09-09 2018-05-22 Tata Consultancy Services Limited Noisy signal identification from non-stationary audio signals
CN113362813A (en) * 2021-06-30 2021-09-07 北京搜狗科技发展有限公司 Voice recognition method and device and electronic equipment

Also Published As

Publication number Publication date
US7299173B2 (en) 2007-11-20
WO2003065352A1 (en) 2003-08-07


Legal Events

Date Code Title Description
AS Assignment

Owner name: MOTOROLA, INC., ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MA, CHANGXUE;RANDOLPH, MARK;REEL/FRAME:012567/0995

Effective date: 20020130

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: MOTOROLA MOBILITY, INC, ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MOTOROLA, INC;REEL/FRAME:025673/0558

Effective date: 20100731

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: MOTOROLA MOBILITY LLC, ILLINOIS

Free format text: CHANGE OF NAME;ASSIGNOR:MOTOROLA MOBILITY, INC.;REEL/FRAME:029216/0282

Effective date: 20120622

AS Assignment

Owner name: GOOGLE TECHNOLOGY HOLDINGS LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MOTOROLA MOBILITY LLC;REEL/FRAME:034420/0001

Effective date: 20141028

FPAY Fee payment

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20191120

PRDP Patent reinstated due to the acceptance of a late maintenance fee

Effective date: 20201001

FEPP Fee payment procedure

Free format text: SURCHARGE, PETITION TO ACCEPT PYMT AFTER EXP, UNINTENTIONAL (ORIGINAL EVENT CODE: M1558); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PETITION RELATED TO MAINTENANCE FEES GRANTED (ORIGINAL EVENT CODE: PMFG); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PETITION RELATED TO MAINTENANCE FEES FILED (ORIGINAL EVENT CODE: PMFP); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12

STCF Information on status: patent grant

Free format text: PATENTED CASE