
US11790929B2 - WPE-based dereverberation apparatus using virtual acoustic channel expansion based on deep neural network

Info

Publication number: US11790929B2
Application number: US17/615,492
Other versions: US20230178091A1 (en)
Authority: US (United States)
Prior art keywords: WPE, signal, neural network, deep neural, speech signal
Legal status: Active, expires (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Inventors: Joon Hyuk Chang, Joon Young Yang
Current and original assignee: Industry-University Cooperation Foundation Hanyang University (IUCF-HYU); assignors: Chang, Joon Hyuk; Yang, Joon Young
Events: application filed by Industry-University Cooperation Foundation IUCF-HYU; publication of US20230178091A1; application granted; publication of US11790929B2

Classifications

    • G10L21/0208 — Speech enhancement: noise filtering
    • G10L21/0216 — Noise filtering characterised by the method used for estimating noise
    • G06N3/02 — Computing arrangements based on biological models: neural networks
    • G10L25/21 — Speech or voice analysis: the extracted parameters being power information
    • H04R3/02 — Circuits for transducers: preventing acoustic reaction (acoustic oscillatory feedback)
    • H04S1/00 — Stereophonic systems: two-channel systems
    • G10L2021/02082 — Noise filtering: the noise being echo, reverberation of the speech
    • G10L2021/02163 — Noise estimation with only one microphone
    • G10L25/30 — Speech or voice analysis technique using neural networks

Definitions

  • the present invention relates to a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network (DNN), and more specifically, to a technology using virtual acoustic channel expansion based on a deep neural network so that reverberation can be removed efficiently using a dual-channel WPE algorithm even in a single-channel speech signal environment.
  • a microphone for collecting speech signals may receive as input both the speech signal uttered at the current time point and speech signals uttered at the past time points and delayed in time.
  • the input signals to the microphone in the time domain may be expressed as a convolution operation between the source speech and the impulse response between the speech source and the microphone.
  • the impulse response at this time is called a room impulse response (RIR).
  • the input signals to the microphone consist broadly of a first component and a second component.
  • the first component is an early-arriving speech component, and refers to a component of a signal for which sound waves are collected in a direct path having no reverberation or a path having relatively little reverberation.
  • the second component is a late reverberation component, that is, a reverberation component, collected through a highly reverberant path.
  • the second component is a component that not only makes a speech signal audibly less pleasing, but also degrades the performance of an applied technology such as speech recognition or speaker recognition that operates by receiving speech signals as input. Accordingly, there is a need for an algorithm for removing such a reverberation component.
  • the weighted prediction error (WPE) algorithm is an algorithm for removing the late reverberation component as described above.
  • the WPE is an algorithm of the type that converts a speech signal in the time domain into that of a frequency domain using the short-time Fourier transform (STFT), and estimates and removes reverberation components at the present time point from speech samples of the past time points using a multi-channel linear prediction (MCLP) technique in the frequency domain.
  • the WPE algorithm puts out a single output when a single-channel signal is inputted, and puts out a multi-channel output when a multi-channel speech signal is inputted.
  • reverberation components can be more effectively removed when a multi-channel speech signal collected through a microphone array consisting of a plurality of microphones is given than when only a single-channel speech signal collected through a single microphone is given.
  • the performance of the multi-channel WPE algorithm is better than that of the single-channel WPE algorithm.
  • the biggest difference between the technique disclosed in the prior art literature and the present invention is that the prior art technique creates a virtual microphone signal by interpolating signals collected through two microphones, whereas the method proposed in the present invention assumes that a speech signal collected by only one microphone is given and generates a virtual channel speech signal so that a multi-channel dereverberation algorithm can be used.
  • a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network includes a signal reception unit for receiving as input a first speech signal through a single channel microphone, a signal generation unit for generating a second speech signal by applying a virtual acoustic channel expansion algorithm based on a deep neural network to the first speech signal and a dereverberation unit for removing reverberation of the first speech signal and generating a dereverberated signal from which the reverberation has been removed by applying a dual-channel weighted prediction error (WPE) algorithm based on a deep neural network to the first speech signal and the second speech signal.
  • the signal generation unit may receive a real part and an imaginary part of an STFT coefficient of the first speech signal as input, and output a real part and an imaginary part of an STFT coefficient of the second speech signal.
  • the WPE-based dereverberation apparatus may include a power estimation unit for estimating power of the dereverberated signal of the first speech signal, based on the first speech signal and the second speech signal by using a power estimation algorithm based on a deep neural network.
  • the power estimation unit may provide power estimation value of the dereverberated signal to the dereverberation unit, and the dereverberation unit may remove the reverberation included in the first speech signal by using the power estimation value.
  • in the WPE-based dereverberation apparatus, the power estimation algorithm of the power estimation unit may first be subjected to learning to receive the first speech signal containing a reverberation component as input and to estimate the power of the dereverberated signal; the virtual acoustic channel expansion algorithm of the signal generation unit is subjected to learning afterwards.
  • the learning of the virtual acoustic channel expansion algorithm of the signal generation unit may comprise a pre-training stage and a fine-tuning stage, and the pre-training stage is carried out by performing a self-regression task that allows the virtual acoustic channel expansion algorithm to estimate the same real part and imaginary part as an inputted signal.
  • in the fine-tuning stage, the virtual acoustic channel expansion algorithm may be subjected to learning so that an output signal derived by passing a virtual channel speech signal and an actually observed speech signal through the dual-channel WPE approaches an early-arriving signal.
  • the power estimation algorithm may not be subjected to learning during the pre-training stage and the fine-tuning stage.
  • the virtual acoustic channel expansion algorithm may include a U-Net architecture using a gated linear unit (GLU) instead of a general convolution operation.
  • the virtual acoustic channel expansion algorithm may perform a 2D convolution operation with a stride of (2, 2) without performing max-pooling when down-sampling a feature map.
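As an illustration of the two design choices just described (GLU-gated convolutions, and strided down-sampling in place of max-pooling), below is a minimal PyTorch sketch. The kernel size and channel counts are illustrative assumptions, not values taken from the patent.

```python
import torch
import torch.nn as nn

class GLUConvDown(nn.Module):
    """Gated conv down-sampling block: a Conv2d with stride (2, 2) replaces
    max-pooling, and a GLU supplies the gating described above."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # 2 * out_ch so nn.GLU can split into content and gate halves
        self.conv = nn.Conv2d(in_ch, 2 * out_ch, kernel_size=3,
                              stride=(2, 2), padding=1)
        self.glu = nn.GLU(dim=1)  # gate along the channel axis

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.glu(self.conv(x))

# Real/imaginary parts stacked on the channel axis (2 input channels)
x = torch.randn(4, 2, 513, 128)   # (batch, RI, freq, time)
y = GLUConvDown(2, 24)(x)         # -> (4, 24, 257, 64)
```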
  • FIG. 1 is a diagram showing a method of inputting a speech signal using a single-channel microphone in a reverberant environment in accordance with an embodiment of the present invention.
  • FIG. 2 is a diagram showing a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention.
  • FIG. 3 is a diagram showing the structure of a deep neural network for virtual acoustic channel expansion (VACE) in accordance with an embodiment of the present invention.
  • FIG. 4 is a diagram showing a detailed structure of a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention.
  • FIG. 5 is a table showing the comparison between the performance of the WPE-based dereverberation apparatus in accordance with an embodiment of the present invention and the performance of various dereverberation algorithms.
  • FIG. 6 is a diagram illustrating spectrograms of input and output signals of the WPE-based dereverberation apparatus in accordance with an embodiment of the present invention.
  • the WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network of the present invention has an effect of being able to obtain excellent performance by using a dual-channel WPE through virtual acoustic channel expansion without using a single-channel WPE and without increasing the number of microphones in order to remove reverberation components when only a speech signal collected by one microphone is given.
  • the WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network of the present invention proposes a method of solving the problem from an algorithmic point of view instead of increasing the number of microphones, and has an effect of being able to dramatically reduce the cost required to install additional microphones.
  • FIG. 1 is a diagram showing a method of inputting a speech signal using a single-channel microphone in a reverberant environment in accordance with an embodiment of the present invention.
  • a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network can effectively remove reverberation by applying only speech signals inputted through a single-channel microphone MIC to dual-channel WPE using virtual acoustic channel expansion based on a deep neural network.
  • the WPE described herein may refer to a weighted prediction error based on a deep neural network.
  • the present invention will be described by assuming a scenario in which speech signals are collected using a single-channel microphone MIC in a noiseless reverberant enclosure for the convenience of description, but the principle of the present invention can be likewise extended and applied even in an environment in which background noise is present.
  • speech signals generated from an utterance source SPK may be inputted to a single-channel microphone MIC.
  • even for speech signals generated at the same time, the time point of reaching the single-channel microphone MIC may vary depending on the path of their sound waves.
  • a speech signal is composed of the sum of a first component and a second component.
  • the first component may be a component (i.e., an early-arriving signal) inputted through a direct path or a path with less severe reverberation from the utterance source SPK to the single-channel microphone MIC
  • the second component may be a component (i.e., a reverberant signal) inputted through a highly reverberant path.
  • $X_{t,f} = X_{t,f}^{(1)} + X_{t,f}^{(2)}$ (Eq. 1)
  • X denotes a speech signal in the short-time Fourier transform (STFT) domain
  • t and f denote a time-frame index and a frequency index of the short-time Fourier transform coefficient.
  • the first term representing the first component corresponds to a region formed by cutting from the start point of a room impulse response (RIR) to the point 50 ms after the main peak, and is assumed to be calculated through a convolution operation between the truncated RIR and a source speech.
  • the first term may be considered as an ideal speech signal in which the second component (i.e., the reverberation component) is completely removed and only the first component remains.
  • since reverberation components inputted into the single-channel microphone MIC significantly reduce the accuracy of speech and sound signal processing processes such as speech recognition, direction estimation, speech modeling, and location estimation, effectively removing reverberation components is always an essential element in the field of speech signal processing.
  • speech signals are inputted using a single-channel microphone, they can be applied to a dual-channel WPE by using virtual acoustic channel expansion based on a deep neural network.
  • a method of estimating reverberation time based on multi-channel microphones using a deep neural network in accordance with an embodiment can exploit the relative spatial information between the input signals of the multi-channel microphones, and can also estimate the degree of reverberation by modeling, with a deep neural network (a deep-structure-based machine learning technique), the nonlinear distribution of feature vectors that represent well the reverberation characteristics of a space.
  • the virtual acoustic channel expansion technique is applied based on the WPE algorithm for dereverberation, but the present invention is not limited thereto.
  • the virtual acoustic channel expansion method may also be applied to a multi-channel denoising algorithm such as a parametric multi-channel Wiener filter (PMWF) for denoising.
  • FIG. 2 is a diagram showing a WPE-based dereverberation apparatus 10 using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention.
  • the WPE-based dereverberation apparatus 10 in accordance with an embodiment of the present invention is relevant to all research areas that use a dereverberation algorithm as a pre-processing module for speech applications operating with a single microphone in a highly reverberant space such as a room or lecture hall.
  • it may be applied to artificial intelligence speakers, robots, portable terminals, and so on that use speech applications in an environment where reverberation exists, so as to improve the performance of the technology (speech recognition, speaker recognition, etc.) implemented by the application.
  • the WPE-based dereverberation apparatus 10 in accordance with an embodiment of the present invention is applicable to an application for the purpose of performing speech recognition or speaker recognition by using artificial intelligence speakers, robots, portable terminals, and so on, and is more effective when only one microphone has to be used to reduce costs structurally, in particular.
  • the WPE-based dereverberation apparatus 10 using virtual acoustic channel expansion based on a deep neural network may include a signal reception unit 100 , a signal generation unit 200 , a power estimation unit 300 , and a dereverberation unit 400 .
  • the signal reception unit 100 is a constituent corresponding to the single-channel microphone MIC shown in FIG. 1 , and may receive a first speech signal AS 1 from the single-channel microphone MIC.
  • the first speech signal AS 1 is a signal inputted from the utterance source SPK shown in FIG. 1 and includes a reverberation component.
  • the signal generation unit 200 may generate a second speech signal AS 2 by applying virtual acoustic channel expansion (VACE) based on a deep neural network to the first speech signal AS 1 .
  • the second speech signal AS 2 may be a virtual channel speech signal.
  • a virtual acoustic channel expansion method based on a deep neural network of the signal generation unit 200 will be described in more detail with reference to FIG. 3 .
  • the power estimation unit 300 can estimate the power of a dereverberated signal of the first speech signal AS 1 based on the first speech signal AS 1 and the second speech signal AS 2 by using a power estimation algorithm based on a deep neural network.
  • the power estimation unit 300 may provide a power estimation value of the dereverberated signal NRS to the dereverberation unit 400 .
  • the dereverberation unit 400 may remove the reverberation (e.g., late reverberation) of the first speech signal AS 1 by applying the dual-channel WPE based on a deep neural network to the first speech signal AS 1 and the second speech signal AS 2 . And, the dereverberation unit 400 may output the dereverberated signal NRS from which the reverberation has been removed. The dereverberation unit 400 may remove the reverberation included in the first speech signal AS 1 by using the power estimation value.
  • the power estimation unit 300 and the dereverberation unit 400 may constitute a dual-channel WPE system.
  • the power estimation unit 300 and the dereverberation unit 400 may be integrated and implemented as a single module.
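Putting the four units together, the inference-time dataflow described above can be sketched as follows. All callables (vacenet, lpsnet, dual_channel_wpe) are hypothetical stand-ins for the units in FIG. 2; only the wiring follows the text.

```python
import numpy as np

def vace_wpe_inference(x1_stft, vacenet, lpsnet, dual_channel_wpe):
    """x1_stft: (F, T) complex STFT of the observed single-channel signal.
    Returns the dereverberated actual-channel output."""
    # Signal generation unit: virtual channel from the RI components
    ri = np.stack([x1_stft.real, x1_stft.imag])   # (2, F, T)
    xv_ri = vacenet(ri)                           # (2, F, T)
    xv_stft = xv_ri[0] + 1j * xv_ri[1]

    # Power estimation unit: power of the early (dereverberated) signal,
    # estimated from both channels and averaged over channels
    X = np.stack([x1_stft, xv_stft])              # (2, F, T)
    power = lpsnet(np.abs(X) ** 2).mean(axis=0)   # (F, T)

    # Dereverberation unit: dual-channel WPE driven by the estimated power
    Z = dual_channel_wpe(X, power)                # (2, F, T)
    return Z[0]
```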
  • Classical WPE dereverberation techniques use a linear prediction filter to estimate the reverberation component of an input signal, and subtract the reverberation component estimated through linear prediction from the input signal, thereby calculating a maximum likelihood (ML) estimate of the signal from which the reverberation has been removed. Since no closed-form solution exists for estimating such a linear prediction filter, the filter coefficients must be estimated in an iterative manner, and the process can be expressed by the following equations.
  • $Z_{d,t,f}$ denotes the estimate of the early-arriving signal obtained through the linear prediction technique, $\lambda_{t,f}$ denotes the estimated power of the early-arriving signal at the time-frequency bin (t, f), K denotes the order of the linear prediction filter, and $\Delta$ denotes the delay of the linear prediction algorithm.
  • $\tilde{X}_{t-\Delta,f}$ and $G$ denote stacked representations of, respectively, the STFT coefficients of the microphone input signal and the linear prediction filter coefficients, from the $\Delta$-th past frame to the $(\Delta+K-1)$-th past frame relative to the current frame t.
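The text above refers to Eq. 2 to Eq. 6 without reproducing them. For orientation, the following is the standard iterative WPE update in the notation just defined, reconstructed from the published WPE literature rather than from the patent itself (the correspondence to the patent's own equation numbering is therefore approximate):

$$\lambda_{t,f} = \frac{1}{D}\sum_{d=1}^{D}\left|Z_{d,t,f}\right|^{2}$$

$$R_{f} = \sum_{t}\frac{\tilde{X}_{t-\Delta,f}\,\tilde{X}_{t-\Delta,f}^{\mathsf{H}}}{\lambda_{t,f}},\qquad P_{f} = \sum_{t}\frac{\tilde{X}_{t-\Delta,f}\,X_{t,f}^{\mathsf{H}}}{\lambda_{t,f}}$$

$$G_{f} = R_{f}^{-1}P_{f},\qquad Z_{t,f} = X_{t,f} - G_{f}^{\mathsf{H}}\,\tilde{X}_{t-\Delta,f}$$

Here D is the number of channels; the power update and the filter re-estimation alternate until convergence, and the deep neural network variant described next replaces the power update with the network's estimate.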
  • the WPE dereverberation method utilizing a deep neural network in accordance with an embodiment of the present invention replaces a part of the classical WPE algorithm described above with a logic utilizing a deep neural network.
  • the part for estimating the power of the early-arriving signal in Eq. 6 is replaced with a deep neural network.
  • the deep neural network may be subjected to learning to receive the power of the input signal to the microphone and to estimate the power of $Z_{d,t,f}$, from which the reverberation component has been removed. This is a method of subjecting the deep neural network to learning for the purpose of removing the reverberation component from both the speech component and the noise component.
  • a power estimation value of the early-arriving signal for each channel may be calculated using the deep neural network, and the average may then be taken over all channels to calculate the power estimation value that replaces $\lambda_{t,f}$, the left-hand side of Eq. 2. Thereafter, the STFT coefficient of the early-arriving signal may be estimated through the processes of Eq. 3 to Eq. 6.
  • the deep neural network for estimating the power of the early-arriving signal may be subjected to learning to minimize the mean squared error (MSE) between the estimated power of the early-arriving signal and the power of the correct early-arriving signal.
  • the log-scale power spectra (LPS), obtained by taking the logarithm of the power, are used as the actual network input/output, and when applied to the WPE algorithm they can be converted back to linear scale through an exponential operation.
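A minimal NumPy sketch of this conversion (the epsilon floor is an assumption added for numerical safety; the patent does not specify one):

```python
import numpy as np

rng = np.random.default_rng(0)
stft_coeffs = (rng.standard_normal((513, 100))
               + 1j * rng.standard_normal((513, 100)))  # placeholder STFT

eps = 1e-10
power = np.abs(stft_coeffs) ** 2   # linear-scale power
lps = np.log(power + eps)          # log-scale input/output of the network
power_linear = np.exp(lps)         # converted back before use in the WPE
```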
  • FIG. 3 is a diagram showing the structure of a deep neural network for virtual acoustic channel expansion (VACE) in accordance with an embodiment of the present invention.
  • the signal generation unit 200 shown in FIG. 2 may include the deep neural network for virtual acoustic channel expansion shown in FIG. 3 .
  • the deep neural network for virtual acoustic channel expansion may receive as input the real part and the imaginary part of the STFT coefficient of a given first speech signal, and output the real part and the imaginary part of the STFT coefficient of a second speech signal.
  • the deep neural network for virtual acoustic channel expansion is based on the U-Net architecture ("U-Net: Convolutional Networks for Biomedical Image Segmentation").
  • U-Net basically consists of a convolutional encoder-decoder structure, and is characterized by an operation that concatenates the feature map of the encoder and the feature map of the decoder.
  • a gated linear unit was used instead of a general convolution operation, and in this case, a general convolution operation was used instead of a GLU in the convolution operation serving as down-sampling and up-sampling.
  • a 1 ⁇ 1 convolution operation was used in the bottleneck part of the network.
  • a separate decoder stream was used to estimate the real part and the imaginary part of the STFT coefficient of the second speech signal in the decoding path.
  • the deep neural network for virtual acoustic channel expansion of the present invention may use a loss function of the form as below for learning.
  • $L_1^{freq}(A,B) = \mathrm{MSE}(A^r, B^r) + \mathrm{MSE}(A^i, B^i) + \alpha \cdot \mathrm{MSE}(\ln|A|, \ln|B|)$ (Eq. 7)
  • $L_1^{time}(a,b) = \mathrm{MAE}(a,b)$ (Eq. 8)
  • $L_1(A,B) = L_1^{freq}(A,B) + \beta \cdot L_1^{time}(a,b)$ (Eq. 9)
  • A and B denote STFT coefficients, the superscripts r and i denote their real and imaginary parts, and |A| and |B| denote magnitude spectra.
  • a and b denote time-domain signals obtained by taking the inverse STFT of A and B.
  • $\alpha$ and $\beta$ are scaling factors for matching the scale between the loss terms defined in the different signal domains.
  • MSE denotes the mean squared error and MAE the mean absolute error.
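A PyTorch sketch of Eq. 7 to Eq. 9 under the definitions above. The default values of alpha and beta and the epsilon floor inside the logarithm are placeholders; the patent does not state them.

```python
import torch
import torch.nn.functional as F

def vace_loss(A: torch.Tensor, B: torch.Tensor,
              a: torch.Tensor, b: torch.Tensor,
              alpha: float = 1.0, beta: float = 1.0,
              eps: float = 1e-8) -> torch.Tensor:
    """A, B: complex STFT tensors; a, b: matching time-domain signals."""
    l_freq = (F.mse_loss(A.real, B.real)                 # Eq. 7, RI terms
              + F.mse_loss(A.imag, B.imag)
              + alpha * F.mse_loss(torch.log(A.abs() + eps),
                                   torch.log(B.abs() + eps)))
    l_time = F.l1_loss(a, b)                             # Eq. 8 (MAE)
    return l_freq + beta * l_time                        # Eq. 9
```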
  • FIG. 4 is a diagram showing a detailed structure of a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention. Although a single-channel WPE system is shown together as a comparative example for the convenience of description, the present invention is not limited thereto.
  • in order to start the learning of the deep neural network for virtual acoustic channel expansion (hereinafter, VACENet), the VACENet must first be integrated into a WPE system based on a pre-trained deep neural network.
  • the VACENet may correspond to the signal generation unit 200 shown in FIG. 2
  • the WPE system may correspond to the power estimation unit 300 and the dereverberation unit 400 .
  • FIG. 4 shows a block diagram of a VACE-WPE system in which the VACENet and the dual-channel WPE are integrated.
  • lowercase letters all denote time-domain speech signals
  • uppercase letters all denote STFT domain speech signals.
  • x 1 and x v denote a given single-channel speech signal and a virtual channel speech signal generated through VACENet, respectively.
  • Z 1 and Z v denote the STFT coefficients of the dereverberated signals obtained by passing X 1 and X v through the dual-channel WPE, respectively.
  • Z 0 denotes the STFT coefficient of a dereverberated signal obtained by passing the given single-channel speech signal through a single-channel WPE serving as a comparative example.
  • $X_1^{(1)}$ denotes an ideal early-arriving signal from which reverberation has been completely removed, which is intended to be ultimately obtained through the WPE algorithm.
  • a WPE algorithm based on a deep neural network needs to be prepared first.
  • for this purpose, a neural network (hereinafter, LPSNet) that operates on log-scale power spectra (LPS) is used.
  • the LPSNet is subjected to learning to receive the LPS of a signal containing reverberation as input and to estimate the LPS of the early-arriving signal.
  • the learning stage of the VACENet may be divided into two stages: pre-training and fine-tuning.
  • the reason for pre-training is that if the VACENet is randomly initialized, virtual channel speech signals are generated randomly and thus become ineffective as an input to the WPE.
  • the VACENet is basically configured to receive the real and imaginary parts of STFT coefficients that can be obtained by taking the short-time Fourier transform (STFT) on the observed single-channel speech signal, and to output the RI component of the virtual channel speech signal.
  • the VACENet is subjected to learning to simply estimate the same real and imaginary parts as the input, and this is done under the assumption that the actually observed dual-channel signals will not differ much from each other.
  • the pre-training process can be performed independently of the deep neural network-based WPE.
  • the VACENet is integrated with the neural WPE to construct the VACE-WPE system proposed in the present invention (i.e., a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network).
  • the VACENet is subjected to learning so that the output signal, derived by passing the generated virtual channel speech signal along with the actually observed single-channel speech signal through the dual-channel WPE, approaches the early-arriving signal.
  • the types of loss functions used in the pre-training and fine-tuning processes are defined in Eq. 7 to Eq. 9.
  • the signal generation unit 200 may generate a virtual channel speech signal through VACENet by using a given single-channel speech signal.
  • the power estimation unit 300 may include a deep neural network used to estimate the power of the main signal of the speech signal.
  • the dereverberation unit 400 may remove reverberation from a dual-channel speech signal consisting of a single-channel signal and a generated virtual channel signal by using the dual-channel WPE algorithm.
  • the pre-learning of the VACENet must be performed first. This is because if learning proceeds immediately from a randomly initialized state without the pre-learning of the VACENet, virtual channel signals may be generated randomly, and thus the WPE algorithm may not work properly.
  • Pre-training of the VACENet may be carried out by performing a self-regression task in which the output terminal puts out the same single-channel speech signal that is used at the input terminal of the VACENet.
  • the VACENet may be subjected to learning to minimize $L_1(X_v, X_1)$ in the pre-learning stage. In other words, the VACENet may be subjected to learning so that $X_v$ approaches $X_1$.
  • after the pre-learning of the VACENet is completed in this manner of restoring an inputted signal as it is, the VACENet may be fine-tuned so that the dual-channel WPE can put out an output signal closer to the dereverberated signal than the single-channel WPE.
  • the deep neural network of the power estimation unit 300, which serves to estimate the power of an early-arriving signal among the components of the deep neural network-based WPE, is not subjected to learning during the learning process of the VACENet, and may only be used to estimate the power of the early-arriving signal, with its parameters frozen.
  • the parameters of the VACENet may be subjected to learning in the direction of minimizing $L_1(Z_1, X_1^{(1)})$. That is, the learning is performed for the purpose of bringing the output signal of the dual-channel WPE closer to the ideal early-arriving signal.
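The two-stage schedule described above can be condensed into the following sketch. vacenet, lpsnet, wpe, istft, and vace_loss are hypothetical stand-ins passed in as arguments (lpsnet is assumed to have been frozen beforehand); only the training targets follow the text.

```python
def pretrain_step(vacenet, opt, X1, istft, vace_loss):
    """Stage 1, self-regression: drive the virtual channel X_v toward X_1."""
    Xv = vacenet(X1)
    loss = vace_loss(Xv, X1, istft(Xv), istft(X1))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def finetune_step(vacenet, lpsnet, wpe, opt, X1, X1_early, istft, vace_loss):
    """Stage 2, fine-tuning: pass (X_1, X_v) through the dual-channel WPE
    and drive the real-channel output Z_1 toward the ideal X_1^(1)."""
    Xv = vacenet(X1)
    power = lpsnet(X1, Xv)            # frozen power estimator, no updates
    Z1, Zv = wpe(X1, Xv, power)
    loss = vace_loss(Z1, X1_early, istft(Z1), istft(X1_early))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```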
  • FIG. 5 is a table showing the comparison between the performance of the WPE-based dereverberation apparatus in accordance with an embodiment of the present invention and the performance of various dereverberation algorithms.
  • FIG. 6 is a diagram illustrating spectrograms of input and output signals of the WPE-based dereverberation apparatus in accordance with an embodiment of the present invention.
  • the TIMIT DB is a public DB frequently used in dereverberation or denoising experiments.
  • all utterances having a duration of 2.8 seconds or less were removed from the entire DB, and a small quantity of the remaining utterances was separated and used as a validation set.
  • This simulated RIR DB contains RIRs generated through simulations in a small room, a medium room, and a large room.
  • the training RIR set consisted of 16,200 medium room RIRs and 5,400 large room RIRs
  • the validation RIR set consisted of 1,800 medium room RIRs and 600 large room RIRs.
  • the RIR DB used to construct the evaluation set for comparing and evaluating the WPE-based dereverberation apparatus proposed in the present invention consists of real RIRs provided by REVERB Challenge 2014; unlike the simulated RIRs used for learning, these were actually recorded rather than artificially generated.
  • the corresponding RIR set includes eight (8) RIRs for each of the small, medium, and large rooms.
  • the reverberation time RT60 for each room is about 0.25, 0.5, and 0.7 seconds, respectively.
  • Each RIR consists of a total of 8 channels, but only the first channel was used in this experiment.
  • speech samples contaminated with reverberation had a sampling frequency of 16 kHz, and were converted into STFT domain signals using a window size of 64 ms and a hop size of 16 ms.
  • the FFT size used at this time was 1,024, and accordingly, 513-dimensional log-scale power spectra (LPS) were used as an input to the LPSNet, and a feature obtained by stacking the real and imaginary parts of the 513-dimensional STFT coefficients on the channel axis was used as an input to the VACENet.
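With the parameters stated above (16 kHz sampling, 64 ms window, 16 ms hop, FFT size 1,024, hence 513 frequency bins), the two input features can be computed as follows; the Hann window is an assumption, since the patent does not name the window function.

```python
import numpy as np
from scipy.signal import stft

fs = 16000
x = np.random.randn(fs * 3)   # placeholder 3-second waveform

# 64 ms window = 1024 samples, 16 ms hop = 256 samples
f, t, X = stft(x, fs=fs, window="hann",
               nperseg=1024, noverlap=1024 - 256, nfft=1024)

lps = np.log(np.abs(X) ** 2 + 1e-10)   # 513-dim LPS input to the LPSNet
ri = np.stack([X.real, X.imag])        # (2, 513, T) RI input to the VACENet
```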
  • the structure proposed in the paper mentioned above first applies the 2D convolution (Conv2D) operation and the max-pooling operation to the input feature several times, and then stacks and processes a plurality of dilated 1D convolution (Conv1D) blocks.
  • the kernel size of the Conv2D was reduced from (9, 9) to (5, 5) in that structure, and the number of channels was reduced from (32, 64) to (24, 48), respectively.
  • the number of dilated Conv1D blocks was increased from 2 to 4 for use.
  • the input LPS feature was normalized through a learnable batch normalization layer.
  • Batch normalization was likewise applied to the input feature in the VACENet, and at this time it was applied to the real part and the imaginary part separately.
  • the delay Δ of the linear prediction filter in the WPE algorithm was set to 3
  • the number of taps K was set to 20.
  • An on-the-fly data generator was used when configuring a mini-batch for model learning.
  • In this method, one piece of clean speech data is selected randomly from a given speech dataset for learning and one RIR is selected randomly from an RIR dataset; arbitrary reverberated speech is then created through the convolution of these two signals, and the reverberated speech utterances generated randomly in this way were bundled in groups of four to form one mini-batch.
  • the speech data is cropped in an arbitrary interval so as to have a length of 2.8 seconds before convolution.
  • One training epoch for deep neural network learning was defined as an iteration for 6,000 mini-batches.
  • the learning rate was halved each time the validation loss failed to reach a new lowest value twice in a row. Further, dropout and gradient clipping played an important role in regularizing and stabilizing learning; the dropout rate was set to 0.3 and the global norm value for gradient clipping was set to 3.0.
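A sketch of this training configuration in PyTorch. The base learning rate and the plateau patience are assumptions (the text states only the halving rule, the dropout rate of 0.3 inside the model, and the global clipping norm of 3.0).

```python
import torch

model = torch.nn.Linear(513, 513)  # stand-in for the actual network
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Halve the learning rate when the validation loss stops improving
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=2)

loss = model(torch.randn(4, 513)).pow(2).mean()  # dummy training loss
opt.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=3.0)  # global norm
opt.step()
sched.step(loss.item())   # in practice, step with the validation loss
```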
  • the number of filter taps K of the WPE algorithm was set to 10 in the learning stage only, as determined through the experiments. This is because if the number of taps of the linear prediction filter is not reduced in this way, too small a loss value is produced from the initial stage of training, and learning does not proceed properly.
  • the WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network proposed in the present invention was compared with and evaluated against a single-channel WPE using only actually observed single-channel speech and a dual-channel WPE using actual dual-channel speech.
  • the number of filter taps for the single-channel WPE was set to 60, and the number of filter taps for the actual dual-channel WPE was set to 20.
  • the actual second channel signal was generated using the RIR of the 5th channel facing the 1st channel out of REVERB Challenge 2014 RIRs of a total of 8 channels.
  • the dereverberation performance of each algorithm was evaluated through the perceptual evaluation of speech quality (PESQ), the cepstrum distance (CD), the log-likelihood ratio (LLR), and the non-intrusive signal-to-reverberation modulation energy ratio (SRMR).
  • Referring to FIG. 5, there is shown a comparison between the performance of the WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention and the performance of other dereverberation algorithms.
  • the second row shows the evaluation results for the single-channel WPE output signal, which is denoted by z 0 .
  • the third and fourth rows show the performance for the actual channel output and virtual channel output of the VACE-WPE algorithm proposed in the present invention, which are denoted by z 1 and z v , respectively.
  • the first row shows the performance for the reverberated speech signal x 1 without using a dereverberation algorithm
  • the last row shows the performance for the first channel output signal (actual) of the dual-channel WPE algorithm using the actual dual-channel speech signal.
  • the proposed VACE-WPE method z 1 shows better performance than the conventional single-channel WPE z 0 , which means that it is possible to generate, via the deep neural network, a virtual channel speech signal that is effective as a second channel input of a dual-channel WPE through the dereverberation apparatus proposed by the present invention.
  • the dual-channel WPE algorithm that has removed the reverberation through the actual dual-channel speech still shows somewhat better performance than the VACE-WPE method that has removed the reverberation through the virtual channel speech signal.
  • the virtual channel speech signal z v generated through the proposed method exhibits completely different characteristics from the rest of the signals. In terms of only the performance measured through the evaluation metric, it can be observed that the virtual channel speech signal z v shows the worst performance, and the performance difference is very large.
  • FIG. 6 is a diagram illustrating spectrograms of input and output signals of a WPE-based dereverberation apparatus in accordance with an embodiment of the present invention in a large room environment.
  • the virtual channel speech signal x v generated exhibits a completely different spectral pattern from the actually observed speech signal x 1 , and it can be observed that z 1 obtained by passing this through the WPE also exhibits completely different characteristics from z v .
  • the reverberation component of the output signal z 1 of the WPE corresponding to the actually observed speech signal x 1 channel is reduced over the entire frequency range compared to the actually observed speech signal x 1 .
  • the unprocessed signal (i.e., the signal containing reverberation) showed the lowest CD value, and this is because the small room acoustics is an unfamiliar environment that was not taken into account during learning.
  • in addition, the number of taps of the linear prediction filter was relatively large for use in a small room environment. The single-channel and dual-channel WPE algorithms used 60 and 20 filter taps, respectively, which are somewhat too high values for a small room where the reverberation component is not that strong; with these numbers of filter taps, the reverberation component was estimated and removed excessively, removing even speech components that should not have been removed, which very likely caused distortion in the output signal.
  • the VACENet of the WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention is subjected to learning to generate, from only an observed single-channel speech signal, a virtual auxiliary speech signal that allows better removal of reverberation components through multi-input multi-output (MIMO) WPE algorithms, and the generated virtual channel signal does not appear to have any microphone array characteristics.
  • the WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention has the potential for development through the generation of virtual channels, and at the same time, a neural network may more accurately calculate an early-arriving speech signal or a late reverberation signal through MCLP algorithms.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment.


Abstract

According to an aspect, a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network includes a signal reception unit for receiving as input a first speech signal through a single channel microphone, a signal generation unit for generating a second speech signal by applying a virtual acoustic channel expansion algorithm based on a deep neural network to the first speech signal and a dereverberation unit for removing reverberation of the first speech signal and generating a dereverberated signal from which the reverberation has been removed by applying a dual-channel weighted prediction error (WPE) algorithm based on a deep neural network to the first speech signal and the second speech signal.

Description

CROSS REFERENCE TO RELATED APPLICATIONS
This application is a National Stage of International Application No. PCT/KR2021/010308 filed Aug. 4, 2021, claiming priority based on Korean Patent Application No. 10-2020-0097584 filed Aug. 4, 2020.
TECHNICAL FIELD
The present invention relates to a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network (DNN), and more specifically, to a technology using virtual acoustic channel expansion based on a deep neural network so that reverberation can be removed efficiently using a dual-channel WPE algorithm even in a single-channel speech signal environment.
BACKGROUND
If a speech signal is uttered in a reverberant enclosure, the sound waves are reflected by walls, ceilings, obstacles, or the like. Therefore, a microphone for collecting speech signals may receive as input both the speech signal uttered at the current time point and speech signals uttered at the past time points and delayed in time.
In this case, the input signals to the microphone in the time domain may be expressed as a convolution operation between the source speech and the impulse response between the speech source and the microphone. The impulse response at this time is called a room impulse response (RIR).
The input signals to the microphone consist broadly of a first component and a second component. The first component is an early-arriving speech component, and refers to a component of a signal for which sound waves are collected in a direct path having no reverberation or a path having relatively little reverberation. The second component is a late reverberation component, that is, a reverberation component, collected through a highly reverberant path.
Here, the second component (reverberation component) is a component that not only makes a speech signal audibly less pleasing, but also degrades the performance of an applied technology such as speech recognition or speaker recognition that operates by receiving speech signals as input. Accordingly, there is a need for an algorithm for removing such a reverberation component.
The weighted prediction error (WPE) algorithm is an algorithm for removing the late reverberation component as described above.
Specifically, the WPE is an algorithm of the type that converts a speech signal in the time domain into that of a frequency domain using the short-time Fourier transform (STFT), and estimates and removes reverberation components at the present time point from speech samples of the past time points using a multi-channel linear prediction (MCLP) technique in the frequency domain.
In general, the WPE algorithm puts out a single output when a single-channel signal is inputted, and puts out a multi-channel output when a multi-channel speech signal is inputted.
In this case, reverberation components can be more effectively removed when a multi-channel speech signal collected through a microphone array consisting of a plurality of microphones is given than when only a single-channel speech signal collected through a single microphone is given. In other words, it can be said that the performance of the multi-channel WPE algorithm is better than that of the single-channel WPE algorithm.
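For concreteness, a minimal per-frequency NumPy sketch of the iterative WPE computation follows. It implements the standard formulation from the WPE literature (not the patent's exact equations), with the power estimate taken from the current dereverberated signal; in the deep neural network variants discussed later, this power estimate comes from a network instead.

```python
import numpy as np

def wpe_one_band(X, taps=10, delay=3, iters=3, eps=1e-10):
    """WPE for a single frequency bin. X: (D, T) complex STFT rows for
    D channels; returns the dereverberated (D, T) signal."""
    D, T = X.shape
    Z = X.copy()
    for _ in range(iters):
        lam = np.maximum(np.mean(np.abs(Z) ** 2, axis=0), eps)  # power
        # Stacked representation of delayed frames (filter order = taps)
        Xt = np.zeros((D * taps, T), dtype=X.dtype)
        for k in range(taps):
            s = delay + k
            Xt[k * D:(k + 1) * D, s:] = X[:, :T - s]
        R = (Xt / lam) @ Xt.conj().T            # weighted correlation
        P = (Xt / lam) @ X.conj().T             # weighted cross-correlation
        G = np.linalg.solve(R + eps * np.eye(D * taps), P)  # prediction filter
        Z = X - G.conj().T @ Xt                 # subtract late reverberation
    return Z

# Usage: apply independently to every frequency bin of a (D, F, T) STFT.
```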
On the other hand, in the prior art literature "Virtually increasing microphone array elements by interpolation in complex-logarithmic domain" (H. Katahira, N. Ono, S. Miyabe, T. Yamada, and S. Makino, EUSIPCO, 2013), a technique has been proposed for creating, when signals collected through two microphones are given, a new virtual microphone signal by interpolating these two signals in the complex-logarithmic domain. Specifically, the technique applied interpolation separately to the magnitude and phase calculated from the STFT coefficients. The prior art literature has shown that noise can be effectively removed by applying the plurality of generated virtual microphone signals to a specific type of beamformer.
The biggest difference between the technique disclosed in the prior art literature and the present invention is that the prior art technique creates a virtual microphone signal by interpolating signals collected through two microphones, whereas the method proposed in the present invention assumes that a speech signal collected by only one microphone is given and generates a virtual channel speech signal so that a multi-channel dereverberation algorithm can be used.
SUMMARY OF INVENTION
Technical Objects
It is an object of the present invention to provide a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network that can use a dual-channel WPE algorithm when only a single-channel speech signal is given, in order to effectively remove reverberation components from the single-channel speech signal collected in a reverberant enclosure.
It is another object of the present invention to provide a WPE-based dereverberation apparatus that can generate a virtual acoustic channel speech signal using a deep neural network and can remove reverberation components with better performance than a single-channel WPE through a dual-channel WPE algorithm.
According to an aspect, a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network includes a signal reception unit for receiving as input a first speech signal through a single channel microphone, a signal generation unit for generating a second speech signal by applying a virtual acoustic channel expansion algorithm based on a deep neural network to the first speech signal and a dereverberation unit for removing reverberation of the first speech signal and generating a dereverberated signal from which the reverberation has been removed by applying a dual-channel weighted prediction error (WPE) algorithm based on a deep neural network to the first speech signal and the second speech signal.
The signal generation unit may receive a real part and an imaginary part of an STFT coefficient of the first speech signal as input, and output a real part and an imaginary part of an STFT coefficient of the second speech signal.
The WPE-based dereverberation apparatus may include a power estimation unit for estimating power of the dereverberated signal of the first speech signal, based on the first speech signal and the second speech signal by using a power estimation algorithm based on a deep neural network.
The power estimation unit may provide power estimation value of the dereverberated signal to the dereverberation unit, and the dereverberation unit may remove the reverberation included in the first speech signal by using the power estimation value.
In the WPE-based dereverberation apparatus, the power estimation algorithm of the power estimation unit may first be subjected to learning to receive the first speech signal containing a reverberation component as input and to estimate the power of the dereverberated signal; the virtual acoustic channel expansion algorithm of the signal generation unit is subjected to learning afterwards.
The learning of the virtual acoustic channel expansion algorithm of the signal generation unit may comprise a pre-training stage and a fine-tuning stage, and the pre-training stage is carried out by performing a self-regression task that allows the virtual acoustic channel expansion algorithm to estimate the same real part and imaginary part as an inputted signal.
In the fine-tuning stage, the virtual acoustic channel expansion algorithm may be subjected to learning so that an output signal derived by passing a virtual channel speech signal and an actually observed speech signal through the dual-channel WPE approaches an early-arriving signal.
The power estimation algorithm may not be subjected to learning during the pre-training stage and the fine-tuning stage.
The virtual acoustic channel expansion algorithm may include a U-Net architecture using a gated linear unit (GLU) instead of a general convolution operation.
The virtual acoustic channel expansion algorithm may perform a 2D convolution operation with a stride of (2, 2) without performing max-pooling when down-sampling a feature map.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram showing a method of inputting a speech signal using a single-channel microphone in a reverberant environment in accordance with an embodiment of the present invention.
FIG. 2 is a diagram showing a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention.
FIG. 3 is a diagram showing the structure of a deep neural network for virtual acoustic channel expansion (VACE) in accordance with an embodiment of the present invention.
FIG. 4 is a diagram showing a detailed structure of a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention.
FIG. 5 is a table showing the comparison between the performance of the WPE-based dereverberation apparatus in accordance with an embodiment of the present invention and the performance of various dereverberation algorithms.
FIG. 6 is a diagram illustrating spectrograms of input and output signals of the WPE-based dereverberation apparatus in accordance with an embodiment of the present invention.
EFFECTS OF THE INVENTION
The WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network of the present invention has an effect of being able to obtain excellent performance by using a dual-channel WPE through virtual acoustic channel expansion without using a single-channel WPE and without increasing the number of microphones in order to remove reverberation components when only a speech signal collected by one microphone is given.
In addition, the WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network of the present invention proposes a method of solving the problem from an algorithmic point of view instead of increasing the number of microphones, and has an effect of being able to dramatically reduce the cost required to install additional microphones.
DETAILED DESCRIPTION OF EMBODIMENTS
Configurations illustrated in the embodiments and the drawings described in the present specification are only the preferred embodiments of the present disclosure, and thus it is to be understood that various modified examples, which may replace the embodiments and the drawings described in the present specification, are possible when filing the present application.
The terms used in the present specification are used to describe the embodiments of the present disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention is provided for illustration purpose only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents. It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise.
It will be understood that when the terms “includes,” “comprises,” “including,” and/or “comprising,” when used in this specification, specify the presence of stated features, figures, steps, components, or combination thereof, but do not preclude the presence or addition of one or more other features, figures, steps, components, members, or combinations thereof.
It will be understood that, although the terms first, second, etc. may be used herein to describe various components, these components should not be limited by these terms.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that the present disclosure may be readily implemented by those skilled in the art. In the drawings, parts irrelevant to the description are omitted for the simplicity of explanation.
FIG. 1 is a diagram showing a method of inputting a speech signal using a single-channel microphone in a reverberant environment in accordance with an embodiment of the present invention.
Referring to FIG. 1 , a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network can effectively remove reverberation by applying only speech signals inputted through a single-channel microphone MIC to dual-channel WPE using virtual acoustic channel expansion based on a deep neural network. The WPE described herein may refer to a weighted prediction error based on a deep neural network.
The present invention will be described by assuming a scenario in which speech signals are collected using a single-channel microphone MIC in a noiseless reverberant enclosure for the convenience of description, but the principle of the present invention can be likewise extended and applied even in an environment in which background noise is present.
As shown in FIG. 1 , speech signals generated from an utterance source SPK may be inputted to a single-channel microphone MIC. In this case, even for the speech signals generated at the same time, the time point of reaching the single-channel microphone MIC may vary depending on the path of their sound waves.
Accordingly, it is assumed herein that a speech signal is composed of the sum of a first component and a second component. The first component may be a component (i.e., an early-arriving signal) inputted through a direct path or a path with less severe reverberation from the utterance source SPK to the single-channel microphone MIC, and the second component may be a component (i.e., a reverberant signal) inputted through a highly reverberant path.
$X_{t,f} = X_{t,f}^{(1)} + X_{t,f}^{(2)}$  (Eq. 1)
In Eq. 1, X denotes a speech signal in the short-time Fourier transform (STFT) domain, and t and f denote a time-frame index and a frequency index of the short-time Fourier transform coefficient.
At this time, the first term, representing the first component, corresponds to the region of the room impulse response (RIR) from its start point to 50 ms after its main peak, and is assumed to be obtained through a convolution operation between this truncated RIR and the source speech. The first term may be considered an ideal speech signal in which the second component (i.e., the reverberation component) has been completely removed and only the first component remains.
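For illustration, the following is a minimal sketch (in Python with NumPy, which the patent does not prescribe) of forming the two components of Eq. 1 by truncating an RIR 50 ms after its main peak; all variable names are illustrative only.

```python
# Split an RIR at 50 ms after its main peak and form the early-arriving and
# reverberant components of Eq. 1 by convolution with the source speech.
import numpy as np

def split_components(source, rir, fs=16000, early_ms=50):
    peak = int(np.argmax(np.abs(rir)))                 # main peak of the RIR
    cut = min(peak + int(early_ms * 1e-3 * fs), len(rir))  # 50 ms after the peak
    early_rir = np.concatenate([rir[:cut], np.zeros(len(rir) - cut)])
    late_rir = np.concatenate([np.zeros(cut), rir[cut:]])
    x_early = np.convolve(source, early_rir)           # first component of Eq. 1
    x_late = np.convolve(source, late_rir)             # second component of Eq. 1
    return x_early, x_late                             # their sum is the full reverberant signal
```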
Since reverberation components inputted into the single-channel microphone MIC significantly reduce the accuracy of speech and sound signal processing tasks such as speech recognition, direction estimation, speech modeling, and location estimation, effectively removing reverberation components is an essential element in the field of speech signal processing.
Hence, if speech signals are inputted using a single-channel microphone, they can be applied to a dual-channel WPE by using virtual acoustic channel expansion based on a deep neural network.
In other words, a method of estimating reverberation time based on multi-channel microphones using a deep neural network in accordance with an embodiment can exploit the relative spatial information between the input signals of the multi-channel microphones, and can also estimate the degree of reverberation by modeling, with a deep neural network (a deep structure-based machine learning technique), the nonlinear distribution of feature vectors that represent the reverberation characteristics of a space.
In the present invention, such virtual acoustic channel expansion technique is applied based on the WPE algorithm for dereverberation, but the present invention is not limited thereto. According to an embodiment, the virtual acoustic channel expansion method may also be applied to a multi-channel denoising algorithm such as a parametric multi-channel Wiener filter (PMWF) for denoising.
According to an embodiment, even when using multi-channel microphones of a structure employed in various environments such as mobile devices and IoT devices, it is possible to provide a dereverberation method that compensates for the shortcomings and improves robustness in accuracy and noise by using the WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network of the present invention.
FIG. 2 is a diagram showing a WPE-based dereverberation apparatus 10 using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention.
The WPE-based dereverberation apparatus 10 in accordance with an embodiment of the present invention is relevant to all research areas using a dereverberation algorithm as a pre-processing module when using a speech application by using a single microphone in a highly reverberant space such as a room or lecture hall. In particular, it may be applied to artificial intelligence speakers, robots, portable terminals, and so on that use speech applications in an environment where reverberation exists, so as to improve the performance of the technology (speech recognition, speaker recognition, etc.) implemented by the application.
In addition, the WPE-based dereverberation apparatus 10 in accordance with an embodiment of the present invention is applicable to an application for the purpose of performing speech recognition or speaker recognition by using artificial intelligence speakers, robots, portable terminals, and so on, and is more effective when only one microphone has to be used to reduce costs structurally, in particular.
Referring to FIG. 2 , the WPE-based dereverberation apparatus 10 using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention may include a signal reception unit 100, a signal generation unit 200, a power estimation unit 300, and a dereverberation unit 400.
The signal reception unit 100 is a constituent corresponding to the single-channel microphone MIC shown in FIG. 1 , and may receive a first speech signal AS1 from the single-channel microphone MIC. In this case, the first speech signal AS1 is a signal inputted from the utterance source SPK shown in FIG. 1 and includes a reverberation component.
The signal generation unit 200 may generate a second speech signal AS2 by applying virtual acoustic channel expansion (VACE) based on a deep neural network to the first speech signal AS1. The second speech signal AS2 may be a virtual channel speech signal. A virtual acoustic channel expansion method based on a deep neural network of the signal generation unit 200 will be described in more detail with reference to FIG. 3 .
The power estimation unit 300 can estimate the power of a dereverberated signal of the first speech signal AS1 based on the first speech signal AS1 and the second speech signal AS2 by using a power estimation algorithm based on a deep neural network. The power estimation unit 300 may provide a power estimation value of the dereverberated signal NRS to the dereverberation unit 400.
The dereverberation unit 400 may remove the reverberation (e.g., late reverberation) of the first speech signal AS1 by applying the dual-channel WPE based on a deep neural network to the first speech signal AS1 and the second speech signal AS2. And, the dereverberation unit 400 may output the dereverberated signal NRS from which the reverberation has been removed. The dereverberation unit 400 may remove the reverberation included in the first speech signal AS1 by using the power estimation value.
For example, the power estimation unit 300 and the dereverberation unit 400 may constitute a dual-channel WPE system. According to an embodiment, the power estimation unit 300 and the dereverberation unit 400 may be implemented as a single integrated component.
In the following, a WPE-based dereverberation method utilizing a deep neural network will be described.
Classical WPE dereverberation techniques use a linear prediction filter to estimate the reverberation component of an input signal, and subtract the estimated reverberation component from the input signal, thereby obtaining a maximum-likelihood (ML) estimate of the dereverberated signal. Since no closed-form solution exists for estimating such a linear prediction filter, the filter coefficients must be estimated iteratively, and the process can be expressed as the following equations.
$\lambda_{t,f} = \frac{1}{D}\sum_{d}\left|Z_{d,t,f}\right|^{2}$  (Eq. 2)

$R_f = \sum_{t}\frac{\tilde{X}_{t-\Delta,f}\,\tilde{X}_{t-\Delta,f}^{H}}{\lambda_{t,f}} \in \mathbb{C}^{DK \times DK}$  (Eq. 3)

$P_f = \sum_{t}\frac{\tilde{X}_{t-\Delta,f}\,X_{t,f}^{H}}{\lambda_{t,f}} \in \mathbb{C}^{DK \times D}$  (Eq. 4)

$G_f = R_f^{-1} P_f \in \mathbb{C}^{DK \times D}$  (Eq. 5)

$Z_{t,f} = X_{t,f} - G_f^{H}\tilde{X}_{t-\Delta,f}$  (Eq. 6)
Here, $Z_{d,t,f}$ denotes the estimate of the early-arriving signal obtained through the linear prediction technique, $\lambda_{t,f}$ denotes the power of the estimated early-arriving signal at time-frequency bin (t, f), $D$ denotes the number of channels, and $K$ denotes the order of the linear prediction filter.

$\Delta$ denotes the prediction delay of the linear prediction algorithm, and $\tilde{X}_{t-\Delta,f}$ and $G_f$ denote stacked representations of, respectively, the STFT coefficients of the microphone input signal and the linear prediction filter coefficients, covering the frames from the $\Delta$-th to the $(\Delta+K-1)$-th frame in the past relative to the current frame $t$.
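For illustration, the following NumPy sketch implements the iterative recursion of Eq. 2 to Eq. 6 for a single frequency bin; it is an illustrative reimplementation under the notation above, not code from the patent.

```python
# Iterative classical WPE for one frequency bin. X has shape (D, T):
# D channels, T frames; K is the filter order and `delay` is Delta.
import numpy as np

def wpe_one_bin(X, K=10, delay=3, iters=3, eps=1e-8):
    D, T = X.shape
    # Stacked delayed observations X~_{t-Delta,f}, shape (D*K, T).
    X_tilde = np.zeros((D * K, T), dtype=X.dtype)
    for k in range(K):
        s = delay + k
        X_tilde[k * D:(k + 1) * D, s:] = X[:, :T - s]
    Z = X.copy()                                      # initialize with the observation
    for _ in range(iters):
        lam = np.mean(np.abs(Z) ** 2, axis=0) + eps   # Eq. 2
        R = (X_tilde / lam) @ X_tilde.conj().T        # Eq. 3
        P = (X_tilde / lam) @ X.conj().T              # Eq. 4
        G = np.linalg.solve(R, P)                     # Eq. 5
        Z = X - G.conj().T @ X_tilde                  # Eq. 6
    return Z
```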
On the other hand, the WPE dereverberation method utilizing a deep neural network in accordance with an embodiment of the present invention replaces a part of the classical WPE algorithm described above with a logic utilizing a deep neural network.
More specifically, the part that estimates the power of the early-arriving signal, i.e., $\lambda_{t,f}$ in Eq. 2 computed from $Z_{d,t,f}$ of Eq. 6, is replaced with a deep neural network. In this case, the deep neural network may be subjected to learning to receive the power of the microphone input signal and to estimate the power of $Z_{d,t,f}$ from which the reverberation component has been removed. This amounts to training the deep neural network to remove the reverberation component from both the speech component and the noise component.
Once the learning of the deep neural network is completed, a power estimate of the early-arriving signal may be calculated for each channel using the deep neural network, and the average may then be taken over all channels to obtain the power estimate that replaces $\lambda_{t,f}$, the left-hand side of Eq. 2. Thereafter, the STFT coefficients of the early-arriving signal may be estimated through the processes of Eq. 3 to Eq. 6.
The deep neural network for estimating the power of the early-arriving signal may be subjected to learning to minimize the mean squared error (MSE) between the estimated power of the early-arriving signal and the power of the ground-truth early-arriving signal. At this time, the log-scale power spectra (LPS), obtained by taking the logarithm of the power, are used as the actual network input/output, and when applied to the WPE algorithm, they are converted back to the linear scale through an exponential operation.
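A corresponding sketch of the deep neural network-based variant is given below: the network's LPS output, converted back to the linear scale via an exponential and averaged over channels, replaces the iterative power update, so Eq. 3 to Eq. 6 run in a single pass. The callable `lps_net` is a hypothetical placeholder for the trained power-estimation network.

```python
# Single-pass neural WPE for one frequency bin; the DNN supplies lambda.
import numpy as np

def neural_wpe_one_bin(X, lps_net, K=10, delay=3, eps=1e-8):
    D, T = X.shape
    X_tilde = np.zeros((D * K, T), dtype=X.dtype)
    for k in range(K):
        s = delay + k
        X_tilde[k * D:(k + 1) * D, s:] = X[:, :T - s]
    lps = lps_net(np.log(np.abs(X) ** 2 + eps))       # per-channel LPS estimate
    lam = np.exp(lps).mean(axis=0) + eps              # channel average replaces Eq. 2
    R = (X_tilde / lam) @ X_tilde.conj().T            # Eq. 3
    P = (X_tilde / lam) @ X.conj().T                  # Eq. 4
    G = np.linalg.solve(R, P)                         # Eq. 5
    return X - G.conj().T @ X_tilde                   # Eq. 6, single pass
```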
FIG. 3 is a diagram showing the structure of a deep neural network for virtual acoustic channel expansion (VACE) in accordance with an embodiment of the present invention. In this case, the signal generation unit 200 shown in FIG. 2 may include the deep neural network for virtual acoustic channel expansion shown in FIG. 3 .
Referring to FIG. 3 , the deep neural network for virtual acoustic channel expansion may receive as input the real part and the imaginary part of the STFT coefficient of a given first speech signal, and output the real part and the imaginary part of the STFT coefficient of a second speech signal.
In the present invention, the U-Net (U-Net: Convolutional Networks for Biomedical Image Segmentation) architecture was adopted for the virtual acoustic channel expansion, and some settings were changed and applied. U-Net basically consists of a convolutional encoder-decoder structure, and is characterized by an operation that concatenates the feature map of the encoder and the feature map of the decoder.
The settings of the U-Net architecture changed in the present invention are as follows.
When down-sampling the feature map, a 2D convolution operation with a stride of (2, 2) was used without max-pooling.
A gated linear unit (GLU) was used in place of the general convolution operation; however, the convolution operations serving as down-sampling and up-sampling used a general convolution instead of a GLU.
A 1×1 convolution operation was used in the bottleneck part of the network.
A separate decoder stream was used to estimate the real part and the imaginary part of the STFT coefficient of the second speech signal in the decoding path.
Except for the above changes, a model was constructed to be the same as the existing U-Net architecture. In FIG. 3 , wide arrows all denote 2D convolution operations in which the kernel size is 3.
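For illustration, the following PyTorch sketch shows two of the modifications listed above: a GLU-gated 3×3 convolution block and a (2, 2)-strided convolution used for down-sampling in place of max-pooling. The channel sizes are illustrative, not the patent's exact configuration.

```python
import torch
import torch.nn as nn

class GLUConv2d(nn.Module):
    """3x3 convolution gated by a GLU along the channel axis."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Produce 2*out_ch channels; nn.GLU halves them via a learned gate.
        self.conv = nn.Conv2d(in_ch, 2 * out_ch, kernel_size=3, padding=1)
        self.glu = nn.GLU(dim=1)

    def forward(self, x):
        return self.glu(self.conv(x))

# Down-sampling stage: a plain (2, 2)-strided convolution, no max-pooling.
downsample = nn.Conv2d(32, 64, kernel_size=3, stride=(2, 2), padding=1)

x = torch.randn(4, 2, 513, 64)   # (batch, RI channels, frequency, frames)
h = GLUConv2d(2, 32)(x)          # gated convolution block
h = downsample(h)                # spatial dimensions halved by the stride
```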
The deep neural network for virtual acoustic channel expansion of the present invention may use a loss function of the form as below for learning.
$L_1^{\mathrm{freq}}(A,B) = \mathrm{MSE}(A^{r}, B^{r}) + \mathrm{MSE}(A^{i}, B^{i}) + \alpha \cdot \mathrm{MSE}(\ln|A|, \ln|B|)$  (Eq. 7)

$L_1^{\mathrm{time}}(a,b) = \mathrm{MAE}(a,b)$  (Eq. 8)

$L_1(A,B) = L_1^{\mathrm{freq}}(A,B) + \beta \cdot L_1^{\mathrm{time}}(a,b)$  (Eq. 9)
In this case, A and B denote STFT coefficients, the superscripts r and i denote their real and imaginary parts, |A| and |B| denote magnitude spectra, and a and b denote the time-domain signals obtained by taking the inverse STFT of A and B. α and β are scaling factors that match the scales of the loss functions defined in the different signal domains. MSE denotes the mean squared error, and MAE denotes the mean absolute error.
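For illustration, a minimal PyTorch sketch of the loss of Eq. 7 to Eq. 9 is given below; the small constant `eps` guarding the logarithm is an added assumption, and the default α and β follow the values reported in the experiments later in this description.

```python
import torch
import torch.nn.functional as F

def vace_loss(A, B, a, b, alpha=0.3, beta=20.0, eps=1e-8):
    # Eq. 7: MSE on real parts, imaginary parts, and log-magnitude spectra.
    l_freq = (F.mse_loss(A.real, B.real)
              + F.mse_loss(A.imag, B.imag)
              + alpha * F.mse_loss(torch.log(A.abs() + eps),
                                   torch.log(B.abs() + eps)))
    # Eq. 8: mean absolute error between the time-domain signals.
    l_time = F.l1_loss(a, b)
    # Eq. 9: weighted combination across the two domains.
    return l_freq + beta * l_time
```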
FIG. 4 is a diagram showing a detailed structure of a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention. Although a single-channel WPE system is shown together as a comparative example for the convenience of description, the present invention is not limited thereto.
Referring to FIG. 4 , in order to start the learning of the deep neural network for virtual acoustic channel expansion (hereinafter, VACENet), the VACENet must first be integrated into a WPE system based on a pre-learned deep neural network. In this case, the VACENet may correspond to the signal generation unit 200 shown in FIG. 2 , and the WPE system may correspond to the power estimation unit 300 and the dereverberation unit 400.
FIG. 4 shows a block diagram of a VACE-WPE system in which the VACENet and the dual-channel WPE are integrated.
In FIG. 4 , lowercase letters all denote time-domain speech signals, and uppercase letters all denote STFT domain speech signals.
x1 and xv denote a given single-channel speech signal and a virtual channel speech signal generated through VACENet, respectively.
Z1 and Zv denote the STFT coefficients of the dereverberated signals obtained by passing X1 and Xv through the dual-channel WPE, respectively.
Z0 denotes the STFT coefficient of a dereverberated signal obtained by passing the given single-channel speech signal through a single-channel WPE serving as a comparative example.
$X_1^{(1)}$ denotes the ideal early-arriving signal from which reverberation has been completely removed, which is the signal ultimately sought through the WPE algorithm.
In order to construct the WPE-based dereverberation apparatus in accordance with an embodiment of the present invention, a WPE algorithm based on a deep neural network needs to be prepared first. In particular, since the WPE is a fixed algorithm, a neural network (i.e., LPSNet) for estimating the log-scale power spectra (LPS) of the early-arriving speech signal required as a component of the deep neural network-based WPE must be learned in advance. The LPSNet is subjected to learning to receive the LPS of a signal containing reverberation as input and to estimate the LPS of the early-arriving signal.
Next, it is necessary to subject the neural network (i.e., VACENet), which serves to receive a given single-channel speech signal as input and to generate a virtual channel speech signal, to learning. The learning stage of the VACENet may be divided into two stages: pre-training and fine-tuning. The reason for having to perform pre-training is that if the VACENet is randomly initialized, virtual channel speech signals are generated randomly, and thus, it becomes ineffective as an input to the WPE.
The VACENet is basically configured to receive as input the real and imaginary (RI) parts of the STFT coefficients obtained by taking the short-time Fourier transform (STFT) of the observed single-channel speech signal, and to output the RI components of the virtual channel speech signal.
In the pre-training stage, the VACENet is subjected to learning to simply estimate the same real and imaginary parts as the input, and this is done under the assumption that the actually observed dual-channel signals will not differ much from each other. The pre-training process can be performed independently regardless of the deep neural network WPE.
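For illustration, the pre-training objective can be sketched as below, reusing the `vace_loss` sketch following Eq. 9; the tiny convolution standing in for the VACENet and the STFT settings are placeholder assumptions.

```python
import torch

# Stand-in "VACENet": any network mapping RI input to RI output would do here.
vacenet = torch.nn.Conv2d(2, 2, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(vacenet.parameters(), lr=1e-4)
window = torch.hann_window(1024)

x1 = torch.randn(4, 16000)                          # placeholder batch of waveforms
X1 = torch.stft(x1, 1024, 256, 1024, window, return_complex=True)
ri_in = torch.stack([X1.real, X1.imag], dim=1)      # (B, 2, F, T) RI input
ri_out = vacenet(ri_in)                             # RI estimate of the virtual channel
Xv = torch.complex(ri_out[:, 0], ri_out[:, 1])
xv = torch.istft(Xv, 1024, 256, 1024, window, length=x1.shape[-1])

loss = vace_loss(Xv, X1, xv, x1)                    # L1(Xv, X1): self-regression target
optimizer.zero_grad()
loss.backward()
optimizer.step()
```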
After the pre-training process is completed, in order to proceed with fine-tuning, the VACENet is integrated with the neural WPE to construct the VACE-WPE system proposed in the present invention (i.e., a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network).
In the fine-tuning stage, the VACENet is subjected to learning so that the output signal, derived by passing the generated virtual channel speech signal along with the actually observed single-channel speech signal through the dual-channel WPE, approaches the early-arriving signal. The types of loss functions used in the pre-training and fine-tuning processes are defined in Eq. 7 to Eq. 9.
In a brief description of the structure of the VACE-WPE system proposed in the present invention through the block diagram of FIG. 4 , in relation to the units shown in FIG. 2 , the signal generation unit 200 may generate a virtual channel speech signal through the VACENet by using a given single-channel speech signal.
The power estimation unit 300 may include a deep neural network used to estimate the power of the main signal of the speech signal.
The dereverberation unit 400 may remove reverberation from a dual-channel speech signal consisting of a single-channel signal and a generated virtual channel signal by using the dual-channel WPE algorithm.
However, after configuring the structure as described above, in order for the dual-channel WPE to learn to produce an output signal with better dereverberation performance than the single-channel WPE, the pre-training of the VACENet must be performed first. If learning proceeds immediately from a randomly initialized state without pre-training the VACENet, the virtual channel signals are generated randomly, and thus the WPE algorithm may not work properly.
Pre-training of the VACENet may be carried out by performing a self-regression task in which the output terminal reproduces the same single-channel speech signal used at the input terminal of the VACENet.
The reason is that it is assumed that if dual-channel speech signals are given by actually using a dual-channel microphone rather than a virtual channel, there will be no significant difference between the two signals. In fact, the VACENet may be subjected to learning to minimize L1(Xv, X1) in the pre-learning stage. In other words, the VACENet may be subjected to learning so that Xv approaches X1.
In the following, a method of fine-tuning the virtual acoustic channel expansion deep neural network (VACENet) will be described.
After the pre-training of the VACENet, carried out by restoring the inputted signal as it is, is completed, the VACENet may be fine-tuned so that the dual-channel WPE can produce an output signal closer to the dereverberated signal than the single-channel WPE does.
At this time, the deep neural network of the power estimation unit 300 (hereinafter, LPSNet), which estimates the power of the early-arriving signal among the components of the deep neural network-based WPE, is not trained together during the learning process of the VACENet; its parameters are frozen, and it is used only to estimate the power of the early-arriving signal.
In the fine-tuning stage, the parameters of the VACENet may be subjected to learning in the direction of minimizing $L_1(Z_1, X_1^{(1)})$. That is, the learning is performed for the purpose of bringing the output signal of the dual-channel WPE closer to the ideal early-arriving signal.
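For illustration, one fine-tuning step can be sketched as follows, reusing the `vace_loss` sketch above. The compact differentiable dual-channel WPE below follows Eq. 3 to Eq. 6 with the power estimate supplied by the frozen LPSNet; the diagonal loading on R is an added numerical-stability assumption, and `lpsnet`, the VACENet outputs, and the early-arriving targets are placeholders.

```python
import torch

def dual_channel_wpe(X, lam, K=10, delay=3):
    # X: (D, F, T) complex stack of [observed, virtual] channels;
    # lam: (F, T) early-arriving power estimate from the frozen LPSNet.
    D, F_, T = X.shape
    Xt = X.new_zeros(D * K, F_, T)
    for k in range(K):                                    # stacked delayed frames
        s = delay + k
        Xt[k * D:(k + 1) * D, :, s:] = X[:, :, :T - s]
    Xn = Xt / lam
    R = torch.einsum('ift,jft->fij', Xn, Xt.conj())       # Eq. 3, per frequency
    R = R + 1e-6 * torch.eye(D * K, dtype=R.dtype)        # diagonal loading (assumption)
    P = torch.einsum('ift,jft->fij', Xn, X.conj())        # Eq. 4
    G = torch.linalg.solve(R, P)                          # Eq. 5
    return X - torch.einsum('fij,ift->jft', G.conj(), Xt) # Eq. 6

# One fine-tuning step (placeholder names):
# for p in lpsnet.parameters(): p.requires_grad = False   # LPSNet used, not trained
# X = torch.stack([X1, Xv], dim=0)                        # observed + virtual channels
# lam = torch.exp(lpsnet(lps_of(X))).mean(dim=0)          # channel-averaged power
# Z = dual_channel_wpe(X, lam)
# loss = vace_loss(Z[0], X1_early, istft(Z[0]), x1_early) # minimize L1(Z1, X1^(1))
# loss.backward()                                         # gradients reach VACENet only
```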
FIG. 5 is a table showing the comparison between the performance of the WPE-based dereverberation apparatus in accordance with an embodiment of the present invention and the performance of various dereverberation algorithms. FIG. 6 is a diagram illustrating spectrograms of input and output signals of the WPE-based dereverberation apparatus in accordance with an embodiment of the present invention.
In the following, a performance experiment method and experiment results of the WPE-based dereverberation apparatus in accordance with an embodiment of the present invention will be described with reference to FIGS. 5 and 6 .
First, all experiments were conducted using the TIMIT English speech database (DB). The TIMIT DB is a public DB frequently used in dereverberation and denoising experiments. For the experiments, all utterances having a duration of 2.8 seconds or less were first removed from the entire DB, and a small portion of the remaining utterances was set aside as a validation set. As a result, 3,023 utterances from 462 speakers were obtained as a training set, 458 utterances as the validation set, and 1,344 utterances from 168 speakers as a test set.
To subject the VACENet to learning, the clean speech utterances of the TIMIT DB must be contaminated with reverberation, which the network then learns to remove; hence, a room impulse response (RIR) set for adding the reverberation is needed. In the experiments of the present invention, a simulated RIR set frequently used in the speech recognition and speaker recognition recipes of the Kaldi toolkit was used. This is also public data and a known DB used by many researchers.
This simulated RIR DB contains RIRs generated through simulations in a small room, a medium room, and a large room.
However, in the experiments of the present invention, first, the small room RIR was not used in the experiments. The training RIR set consisted of 16,200 medium room RIRs and 5,400 large room RIRs, and the validation RIR set consisted of 1,800 medium room RIRs and 600 large room RIRs.
The RIR DB used to construct the evaluation set for comparing and evaluating the WPE-based dereverberation apparatus proposed in the present invention consists of the real RIRs provided by the REVERB Challenge 2014; unlike the simulated RIRs used for learning, these were actually recorded rather than artificially generated. The corresponding RIR set includes eight (8) RIRs for each of the small, medium, and large rooms. The reverberation time RT60 for each room is about 0.25, 0.5, and 0.7 seconds, respectively. Each RIR consists of a total of 8 channels, but only the first channel was used in this experiment.
First, speech samples contaminated with reverberation had a sampling frequency of 16 kHz, and were converted into STFT domain signals by using a window size and a hop size of 64 ms and 16 ms. The FFT size used at this time was 1,024, and accordingly, 513-dimensional log-scale power spectra (LPS) were used as an input to the LPSNet, and a feature obtained by stacking the real and imaginary parts of the 513-dimensional STFT coefficients on the channel axis was used as an input to the VACENet.
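For illustration, this front-end can be sketched with `torch.stft` as below; the Hann window is an assumption, as the patent does not specify the window function.

```python
# 16 kHz audio, 64 ms window (1024 samples), 16 ms hop (256 samples),
# 1024-point FFT, yielding 513 frequency bins.
import torch

fs = 16000
win = int(0.064 * fs)                    # 1024-sample window
hop = int(0.016 * fs)                    # 256-sample hop
x = torch.randn(fs * 3)                  # placeholder: 3 s of audio
X = torch.stft(x, n_fft=1024, hop_length=hop, win_length=win,
               window=torch.hann_window(win), return_complex=True)
lps = torch.log(X.abs() ** 2 + 1e-8)     # (513, T): LPSNet input
ri = torch.stack([X.real, X.imag], 0)    # (2, 513, T): VACENet input
```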
A structure proposed previously in another research paper was adopted for the structure of the LPSNet. The paper is “Monaural speech enhancement with dilated convolutions,” S. Pirhosseinloo and J. S. Brumberg, in Proc. Interspeech, 2019.
The structure proposed in the above paper first applies 2D convolution (Conv2D) and max-pooling operations to the input feature several times, and then stacks and processes a plurality of dilated 1D convolution (Conv1D) blocks.
In the present invention, the kernel size of the Conv2D operations was reduced from (9, 9) to (5, 5), and the numbers of channels were reduced from (32, 64) to (24, 48), respectively. In addition, the number of dilated Conv1D blocks was increased from 2 to 4.
The input LPS feature was normalized through a learnable batch normalization layer.
Batch normalization was likewise applied to the input feature in the VACENet as well, and at this time, it was applied to the real part and the imaginary part separately.
On the other hand, the real and imaginary parts outputted were normalized at the output terminal of the VACENet using pre-calculated global mean and variance statistics.
After each Conv2D or transposed Conv2D operation in the structure of VACENet, batch normalization and an exponential linear unit (ELU) activation function were used.
In the final VACE-WPE system, the delay Δ of the linear prediction filter in the WPE algorithm was set to 3, and the number of filter taps K was set to 20.
An on-the-fly data generator was used to configure mini-batches for model training. In this method, one clean speech utterance is selected randomly from the training speech dataset and one RIR is selected randomly from the RIR dataset, and an arbitrary reverberated speech signal is created through the convolution of these two signals; the reverberated utterances generated in this way were bundled in groups of four to form one mini-batch.
At this time, the speech data is cropped in an arbitrary interval so as to have a length of 2.8 seconds before convolution. One training epoch for deep neural network learning was defined as an iteration for 6,000 mini-batches.
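For illustration, such a generator can be sketched as below; `clean_set` and `rir_set` are hypothetical in-memory datasets of waveforms and RIRs.

```python
# On-the-fly mini-batch generation: random clean utterance + random RIR,
# crop to 2.8 s before convolution, bundle four utterances per mini-batch.
import numpy as np

def make_batch(clean_set, rir_set, fs=16000, dur=2.8, batch=4):
    out = []
    n = int(dur * fs)
    for _ in range(batch):
        s = clean_set[np.random.randint(len(clean_set))]
        h = rir_set[np.random.randint(len(rir_set))]
        start = np.random.randint(max(1, len(s) - n))   # random 2.8 s crop
        x = np.convolve(s[start:start + n], h)[:n]      # reverberated speech
        out.append(x)
    return np.stack(out)                                # (4, n) mini-batch
```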
The values of α = 0.3 and β = 20 were used in Eq. 7 and Eq. 9, and these values were determined by inspecting the loss values in the early stage of learning. All the deep neural networks were subjected to learning through the Adam optimizer; 10⁻⁴ was used as the initial learning rate in the pre-training stages of the LPSNet and VACENet, and 5×10⁻⁵ was used as the initial learning rate in the fine-tuning stage of the VACENet.
The learning rate was halved each time the validation loss failed to reach a new minimum twice in a row. Further, dropout and gradient clipping played an important role in regularizing and stabilizing learning; the dropout rate was set to 0.3, and the global norm value for gradient clipping was set to 3.0.
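For illustration, this schedule corresponds to the following PyTorch sketch, assuming that `ReduceLROnPlateau` with a patience of 1 matches the halving rule described above; the tiny linear model and random batches are placeholders.

```python
import torch

model = torch.nn.Linear(10, 1)                      # stand-in for LPSNet/VACENet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=1)

for epoch in range(8):
    x, y = torch.randn(32, 10), torch.randn(32, 1)  # placeholder batch
    loss = torch.nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    # Global-norm gradient clipping at 3.0, as described above.
    torch.nn.utils.clip_grad_norm_(model.parameters(), 3.0)
    optimizer.step()
    val_loss = loss.item()                          # stand-in validation loss
    scheduler.step(val_loss)   # halves LR after two epochs without a new minimum
```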
All the weights of the deep neural networks were regularized with a scaling factor of 10⁻⁵.
Finally, the number of filter taps K of the WPE algorithm was set to 10 only in the learning stage through the experiments. This is because if the number of taps of the linear prediction filter is not reduced as described above, a loss value that is too small is generated from the initial stage of training, and thus learning does not proceed properly.
The WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network proposed in the present invention was compared with and evaluated against a single-channel WPE using only actually observed single-channel speech and a dual-channel WPE using actual dual-channel speech. The number of filter taps for the single-channel WPE was set to 60, and the number of filter taps for the actual dual-channel WPE was set to 20. Further, here, the actual second channel signal was generated using the RIR of the 5th channel facing the 1st channel out of REVERB Challenge 2014 RIRs of a total of 8 channels.
The dereverberation performance of each algorithm was evaluated through the perceptual evaluation of speech quality (PESQ), the cepstrum distance (CD), the log-likelihood ratio (LLR), and the non-intrusive signal-to-reverberation modulation energy ratio (SRMR).
The early-arriving signal of the actually observed signal was used as the reference signal when calculating the above evaluation metrics.
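For illustration, one of these metrics can be computed with the third-party `pesq` package as below; the patent does not name any tooling, so this setup is an assumption.

```python
# Wideband PESQ between a reference (early-arriving) signal and a processed
# signal; ref and deg are 1-D float arrays sampled at 16 kHz.
from pesq import pesq

def evaluate_pesq(ref, deg, fs=16000):
    return pesq(fs, ref, deg, 'wb')
```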
Referring to FIG. 5 , there is shown a comparison between the performance of the WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention and the performance of other dereverberation algorithms.
The second row shows the evaluation results for the single-channel WPE output signal, which is denoted by z0. The third and fourth rows show the performance for the actual channel output and virtual channel output of the VACE-WPE algorithm proposed in the present invention, which are denoted by z1 and zv, respectively.
The first row shows the performance for the reverberated speech signal x1 without using a dereverberation algorithm, and the last row shows the performance for the first channel output signal (actual) of the dual-channel WPE algorithm using the actual dual-channel speech signal.
First, analyzing the performance in the medium room and the large room, the comparison of the first three rows of the table shows that the performance improves when the WPE algorithms are used over the case where no processing has been done.
Moreover, it can be seen that the proposed VACE-WPE output z1 shows better performance than the conventional single-channel WPE output z0, which means that the dereverberation apparatus proposed by the present invention can generate, via the deep neural network, a virtual channel speech signal that is effective as the second channel input of a dual-channel WPE.
However, the dual-channel WPE algorithm that has removed the reverberation through the actual dual-channel speech still shows somewhat better performance than the VACE-WPE method that has removed the reverberation through the virtual channel speech signal.
On the other hand, it can be appreciated that the virtual channel speech signal zv generated through the proposed method exhibits completely different characteristics from the rest of the signals. In terms of only the performance measured through the evaluation metric, it can be observed that the virtual channel speech signal zv shows the worst performance, and the performance difference is very large.
FIG. 6 is a diagram illustrating spectrograms of input and output signals of a WPE-based dereverberation apparatus in accordance with an embodiment of the present invention in a large room environment.
Referring to FIG. 6 , it can be seen that the virtual channel speech signal xv generated exhibits a completely different spectral pattern from the actually observed speech signal x1, and it can be observed that z1 obtained by passing this through the WPE also exhibits completely different characteristics from zv.
Nevertheless, it can be confirmed that the reverberation component of the output signal z1 of the WPE corresponding to the actually observed speech signal x1 channel has reduced in the overall frequency range compared to the actual observed speech signal x1.
Next, in a survey of the performance in the small room, it can be seen that the overall performance is much better than the performance in the medium and large rooms.
In addition, it can be seen that the performance of different WPE algorithms is similar to each other in terms of PESQ, LLR, and SRMR. However, zv, the virtual channel output signal of the WPE, showed completely different characteristics from the rest of the signals as in the medium and large rooms.
On the other hand, in the small room, the unprocessed signal, i.e., the signal containing reverberation, showed the lowest CD value, and this is because the small room acoustics is an unfamiliar environment that has not been taken into account during learning.
This may also stem from the fact that the number of taps of the linear prediction filter is relatively large for a small room environment. The single-channel and dual-channel WPE algorithms used 60 and 20 filter taps, respectively, values that are somewhat excessive for a small room where the reverberation component is not severe; with such numbers of filter taps, the reverberation component is estimated and removed excessively, so that even speech components that should have been preserved are removed, which is highly likely to have caused distortion in the output signal.
To summarize the results, the VACENet of the WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention learns to generate, from an observed single-channel speech signal alone, a virtual auxiliary speech signal that allows multi-input multi-output (MIMO) WPE algorithms to better remove reverberation components, and the generated virtual channel signal does not appear to have any microphone-array characteristics.
The reason the actual channel output signal of the VACE-WPE shows better dereverberation performance than the output signal of the single-channel WPE, even though the virtual channel input/output signals of the VACE-WPE exhibit very different characteristics from the actual signals as described above, is that the dual-channel WPE operates based on multi-channel linear prediction (MCLP), whose underlying algorithm is fundamentally different from that of the single-channel WPE.
Therefore, the WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network in accordance with an embodiment of the present invention has the potential for development through the generation of virtual channels, and at the same time, a neural network may more accurately calculate an early-arriving speech signal or a late reverberation signal through MCLP algorithms.
On the other hand, the constitutional elements, units, modules, components, and the like stated as “˜part or portion” in the present invention may be implemented together or individually as logic devices interoperable while being individual. Descriptions of different features of modules, units or the like are intended to emphasize functional embodiments different from each other and do not necessarily mean that the embodiments should be realized by individual hardware or software components. Rather, the functions related to one or more modules or units may be performed by individual hardware or software components or integrated in common or individual hardware or software components.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment.
Additionally, the logic flows and structure block diagrams described in this patent document, which describe particular methods and/or corresponding acts in support of steps and corresponding functions in support of disclosed structural means, may also be utilized to implement corresponding software structures and algorithms, and equivalents thereof.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
This written description sets forth the best mode of the present invention and provides examples to describe the present invention and to enable a person of ordinary skill in the art to make and use the present invention. This written description does not limit the present invention to the specific terms set forth.
While the present invention has been shown and described with reference to certain embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention as defined by the appended claims and their equivalents. Therefore, the technical scope of the present invention should be determined based on the technical scope of the accompanying claims.

Claims (10)

What is claimed is:
1. A WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network, comprising:
a signal reception unit for receiving as input a first speech signal through a single channel microphone;
a signal generation unit for generating a second speech signal by applying a virtual acoustic channel expansion algorithm based on a deep neural network to the first speech signal; and
a dereverberation unit for removing reverberation of the first speech signal and generating a dereverberated signal from which the reverberation has been removed by applying a dual-channel weighted prediction error (WPE) algorithm based on a deep neural network to the first speech signal and the second speech signal.
2. The WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network of claim 1, wherein the signal generation unit receives a real part and an imaginary part of an STFT coefficient of the first speech signal as input, and outputs a real part and an imaginary part of an STFT coefficient of the second speech signal.
3. The WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network of claim 2, further comprising:
a power estimation unit for estimating power of the dereverberated signal of the first speech signal, based on the first speech signal and the second speech signal by using a power estimation algorithm based on a deep neural network.
4. The WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network of claim 3, wherein the power estimation unit provides a power estimation value of the dereverberated signal to the dereverberation unit, and
the dereverberation unit removes the reverberation included in the first speech signal by using the power estimation value.
5. The WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network of claim 4, wherein after the power estimation algorithm of the power estimation unit is subjected to learning to receive the first speech signal containing a reverberation component as input and to estimate the power of the dereverberated signal, the virtual acoustic channel expansion algorithm of the signal generation unit is subjected to learning.
6. The WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network of claim 5, wherein the learning of the virtual acoustic channel expansion algorithm of the signal generation unit comprises a pre-training stage and a fine-tuning stage, and
the pre-training stage is carried out by performing a self-regression task that allows the virtual acoustic channel expansion algorithm to estimate the same real part and imaginary part as an inputted signal.
7. The WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network of claim 6, wherein in the fine-tuning stage, the virtual acoustic channel expansion algorithm is subjected to learning so that an output signal derived by passing a virtual channel speech signal and an actually observed speech signal through dual-channel WPE approaches an early-arriving signal.
8. The WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network of claim 7, wherein the power estimation algorithm is not subjected to learning during the pre-training stage and the fine-tuning stage.
9. The WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network of claim 1, wherein the virtual acoustic channel expansion algorithm has a U-Net architecture using a gated linear unit (GLU) instead of a general convolution operation.
10. The WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network of claim 9, wherein the virtual acoustic channel expansion algorithm performs a 2D convolution operation with a stride of (2, 2) without performing max-pooling when down-sampling a feature map.
US17/615,492 2020-08-04 2021-08-04 WPE-based dereverberation apparatus using virtual acoustic channel expansion based on deep neural network Active 2042-01-01 US11790929B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR1020200097584A KR102316627B1 (en) 2020-08-04 2020-08-04 Device for speech dereverberation based on weighted prediction error using virtual acoustic channel expansion based on deep neural networks
KR10-2020-0097584 2020-08-04
PCT/KR2021/010308 WO2022031061A1 (en) 2020-08-04 2021-08-04 Wpe-based reverberation removal apparatus using deep neural network-based virtual channel extension

Publications (2)

Publication Number Publication Date
US20230178091A1 US20230178091A1 (en) 2023-06-08
US11790929B2 true US11790929B2 (en) 2023-10-17

Family

ID=78275695

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/615,492 Active 2042-01-01 US11790929B2 (en) 2020-08-04 2021-08-04 WPE-based dereverberation apparatus using virtual acoustic channel expansion based on deep neural network

Country Status (3)

Country Link
US (1) US11790929B2 (en)
KR (1) KR102316627B1 (en)
WO (1) WO2022031061A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100636167B1 (en) 2004-09-02 2006-10-19 삼성전자주식회사 Wireless audio system using virtual sound algorithm
KR101334991B1 (en) 2012-06-25 2013-12-02 서강대학교산학협력단 Method of dereverberating of single channel speech and speech recognition apparutus using the method
KR20160073874A (en) 2014-12-17 2016-06-27 서울대학교산학협력단 Voice activity detection method based on statistical model employing deep neural network and voice activity detection device performing the same
KR101704926B1 (en) 2015-10-23 2017-02-23 한양대학교 산학협력단 Statistical Model-based Voice Activity Detection with Ensemble of Deep Neural Network Using Acoustic Environment Classification and Voice Activity Detection Method thereof
US10490204B2 (en) * 2017-02-21 2019-11-26 Intel IP Corporation Method and system of acoustic dereverberation factoring the actual non-ideal acoustic environment
US20180350379A1 (en) * 2017-06-02 2018-12-06 Apple Inc. Multi-Channel Speech Signal Enhancement for Robust Voice Trigger Detection and Automatic Speech Recognition
US10283140B1 (en) * 2018-01-12 2019-05-07 Alibaba Group Holding Limited Enhancing audio signals using sub-band deep neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
K. Kinoshita, M. Delcroix, H. Kwon, T. Mori, and T. Nakatani, ‘Neural Network-Based Spectrum Estimation for Online WPE Dereverberation’, in Interspeech, 2017, pp. 384-388. (Year: 2017). *

Also Published As

Publication number Publication date
WO2022031061A1 (en) 2022-02-10
KR102316627B1 (en) 2021-10-22
US20230178091A1 (en) 2023-06-08

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

AS Assignment

Owner name: INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY, KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHANG, JOON HYUK;YANG, JOON YOUNG;REEL/FRAME:058319/0977

Effective date: 20211101

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE