
CN109841227B - Background noise removing method based on learning compensation - Google Patents

Background noise removing method based on learning compensation

Info

Publication number
CN109841227B
CN109841227B (application CN201910182463.7A)
Authority
CN
China
Prior art keywords
background noise
conference
noise
signal
collected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910182463.7A
Other languages
Chinese (zh)
Other versions
CN109841227A (en)
Inventor
张晖
高财政
赵海涛
孙雁飞
朱洪波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201910182463.7A
Publication of CN109841227A
Application granted
Publication of CN109841227B
Legal status: Active

Landscapes

  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention discloses a background noise removal method based on learning compensation, comprising the following steps. Step (1): divide the conference-scene background noise data set into small-conference, medium-conference and large-conference background noise according to the conference scale. Step (2): background noise estimation, specifically: step (2.1): learn the characteristics of the background noise with a GMM model, obtaining separate background noise distributions for the small-conference, medium-conference and large-conference background noise; step (2.2): identify, via the GMMs, which conference scale the background noise of the collected voice signal belongs to, and select the background noise distribution of the corresponding scale according to the identification result. Step (3): according to the estimated background noise distribution, compensate the collected voice signal with a noise learning compensation algorithm and remove the background noise from it. The invention has the advantage of effectively removing background noise.

Description

Background noise removing method based on learning compensation
Technical Field
The invention relates to the field of intelligent conferences, in particular to a background noise removal method based on learning compensation.
Background
Because background noise in a conference scene is time-varying, non-stationary and complex, its distribution and characteristics are difficult to characterize, and it often cannot even be assigned to a specific class; moreover, the background noise differs greatly between conference scenes of different scales. All of this makes removing conference background noise very difficult. A method that effectively removes the conference background noise from the voice signal is therefore needed.
Disclosure of Invention
The invention aims to provide a background noise removing method based on learning compensation, which can effectively remove background noise in a voice signal.
In order to achieve this purpose, the invention adopts the following technical scheme: a background noise removal method based on learning compensation, comprising the following steps:
step (1): scene-based noise classification: according to the conference scale, dividing the conference-scene background noise data set into small-conference, medium-conference and large-conference background noise;
step (2): background noise estimation, which specifically comprises:
step (2.1): learning the characteristics of the background noise with a GMM model, and obtaining separate background noise distributions for the small-conference, medium-conference and large-conference background noise;
step (2.2): identifying, via the GMMs, which conference scale the background noise of the collected voice signal belongs to, and selecting the background noise distribution of the corresponding scale according to the identification result;
step (3): according to the background noise distribution estimated for the collected voice signal, compensating the collected voice signal with a noise learning compensation algorithm, thereby removing the background noise from the collected voice signal.
Further, in the foregoing background noise removal method based on learning compensation, the scene-based noise classification in step (1) specifically comprises: first screening, out of the conference-scene background noise data set, representative samples whose background noise distribution is uniform and easy to extract; then dividing the samples into small-conference, medium-conference and large-conference background noise according to the conference scale; then cleaning the classified background noise data; then separating the background noise signals from the sample voice data and splicing them into a number of noise files of identical duration; and finally labelling the noise files manually to complete the classification.
Further, in the foregoing background noise removal method based on learning compensation, in step (3) the noise learning compensation algorithm calculates the speaker signal in the collected voice signal by the following formula:
x(t) = y(t) - n(t)·w
(The source gives this formula only as an image; the form above follows from the relation y(t) = x(t) + n(t)·w of formula (1.1), with the adaptive weight w computed, per formula (1.2), from the k moments preceding moment t.)
wherein y(t) is the collected voice signal, x(t) is the speaker signal, n(t) is the background noise estimated in step (2) for the collected voice signal, and k is an adjustment parameter whose value is determined experimentally.
Through the implementation of the above technical scheme, the invention achieves the beneficial effect that the background noise in the voice signal can be effectively removed.
Drawings
Fig. 1 is a schematic flow chart of a background noise removing method based on learning compensation according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples.
As shown in fig. 1, the background noise removing method based on learning compensation includes the following steps:
step (1): scene-based noise classification: first, representative samples whose background noise distribution is uniform and easy to extract are screened out of the conference-scene background noise data set; the samples are then divided into small-conference, medium-conference and large-conference background noise according to the conference scale; the classified background noise data are then cleaned; the background noise signals are then separated from the sample voice data and spliced into a number of noise files of identical duration; finally, the noise files are labelled manually, so that the conference-scene background noise data set is divided into small-conference, medium-conference and large-conference background noise;
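The splicing part of the data-preparation step above (joining the separated noise signals and cutting them into files of identical duration) can be sketched as follows. This is an illustrative sketch only; the function and parameter names (`splice_noise_segments`, `clip_len`) are not from the patent.

```python
import numpy as np

def splice_noise_segments(segments, clip_len):
    """Concatenate separated background-noise segments and cut the
    result into clips of identical length (in samples), discarding
    any remainder.  Mirrors the 'splice into noise files of
    consistent time length' step of the patent."""
    stream = np.concatenate(segments)
    n_clips = len(stream) // clip_len
    return [stream[i * clip_len:(i + 1) * clip_len] for i in range(n_clips)]

# toy usage: three separated noise segments (10 samples total)
segs = [np.zeros(3), np.ones(5), np.full(2, 2.0)]
clips = splice_noise_segments(segs, clip_len=4)
print(len(clips), [c.shape for c in clips])  # -> 2 [(4,), (4,)]
```

In practice the segments would be audio arrays read from the cleaned sample recordings, and each clip would be written out as one labelled noise file.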
step (2): the background noise estimation method specifically comprises the following steps:
step (2.1): the characteristics of the background noise are learned with a GMM model, and background noise distributions are obtained separately for the small-conference, medium-conference and large-conference background noise; these distributions describe the characteristics and regularities of the corresponding conference background noise, and the amplitude of the background noise signal at a given moment can be predicted from them;
step (2.2): the GMMs identify which conference scale the background noise of the collected voice signal belongs to, and the background noise distribution of the corresponding scale is selected according to the identification result; the selected background noise distribution is the background noise estimate for the collected voice signal;
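Steps (2.1) and (2.2) can be sketched with scikit-learn's `GaussianMixture`. The patent does not specify the acoustic features, the number of mixture components, or the decision rule, so the random stand-in features, `n_components=4`, and the max-average-log-likelihood decision below are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in features for each conference scale (in practice these would
# be frame-level acoustic features of the labelled noise files).
train = {
    "small":  rng.normal(0.0, 1.0, size=(500, 4)),
    "medium": rng.normal(3.0, 1.0, size=(500, 4)),
    "large":  rng.normal(6.0, 1.0, size=(500, 4)),
}

# Step (2.1): one GMM per scale learns that scale's noise distribution.
gmms = {scale: GaussianMixture(n_components=4, random_state=0).fit(x)
        for scale, x in train.items()}

# Step (2.2): score an incoming signal's features under each GMM and
# pick the scale with the highest average log-likelihood; that scale's
# distribution becomes the background noise estimate.
def identify_scale(features):
    return max(gmms, key=lambda s: gmms[s].score(features))

test_feats = rng.normal(3.0, 1.0, size=(50, 4))
print(identify_scale(test_feats))  # -> medium
```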
step (3): according to the background noise distribution estimated for the collected voice signal, compensating the collected voice signal with a noise learning compensation algorithm, so as to remove the background noise from the collected voice signal;
In the noise learning compensation algorithm, the voice signal picked up by the acquisition device consists of the speaker signal and the background noise signal, as expressed in formula (1.1), where y(t) is the collected voice signal, x(t) is the speaker signal, n(t) is the background noise estimated in step (2), and w is an adaptive background noise adjustment parameter. Previous compensation algorithms do not perform adaptive adjustment: they generally obtain the speaker signal directly as y(t) - n(t), which can over-compensate or under-compensate, so that either the background noise is not removed completely or part of the speaker's signal is removed with it. To remedy this, the invention adaptively adjusts the background noise parameter;
y(t)=x(t)+n(t)·w (1.1)
Research shows that the estimated background noise n(t) is a signal uncorrelated with y(t): it does not take the distribution of y(t) into account and can only represent the average distribution of conference-scene noise. Moreover, acquisition devices placed at different positions in a conference scene pick up background noise of different amplitudes. On this basis, w is solved as shown in formula (1.2): the selection of w fully considers the time-domain distribution over the k moments preceding moment t and compensates on the principle that the larger the amplitude, the more noise compensation is applied, so that the background noise parameter adapts well to different environments;
[Formula (1.2), which defines the adaptive weight w from the amplitudes of y over the k moments preceding moment t, appears only as an image in the source and is not reproduced here.]
therefore, the specific calculation formula of the speaker signal in the collected voice signal is as follows:
x(t) = y(t) - n(t)·w    (1.3)
(The source gives (1.3) only as an image; the form above follows directly from formula (1.1), with w given by formula (1.2).)
wherein y(t) is the collected voice signal, x(t) is the speaker signal, n(t) is the background noise estimated in step (2) for the collected voice signal, and k is an adjustment parameter; k is an experimental value whose magnitude is chosen flexibly according to the characteristics of the specific conference scene;
as can be seen from equation (1.3), the speaker signal with background noise removed at any time can be obtained as long as n (t) and y (t) are known.
The invention has the advantage of effectively removing the background noise in the voice signal.

Claims (1)

1. A background noise removing method based on learning compensation is characterized in that: the method comprises the following steps:
step (1): scene-based noise classification: according to the conference scale, dividing the conference scene background noise data set into small conference background noise, medium conference background noise and large conference background noise;
wherein the scene-based noise classification specifically comprises: first screening, out of the conference-scene background noise data set, representative samples whose background noise distribution is uniform and easy to extract; then dividing the samples into small-conference, medium-conference and large-conference background noise according to the conference scale; then cleaning the classified background noise data; then separating the background noise signals from the sample voice data and splicing them into a number of noise files of identical duration; and finally labelling the noise files manually, so that the conference-scene background noise data set is divided into small-conference, medium-conference and large-conference background noise;
step (2): the background noise estimation method specifically comprises the following steps:
step (2.1): learning the characteristics of the background noise by adopting a GMM model, and respectively obtaining background noise distribution of the background noise of the small conference, the background noise of the medium conference and the background noise of the large conference;
step (2.2): identifying the background noise of which scale the collected voice signal belongs to through the GMM, and finally selecting the background noise distribution of the corresponding scale according to the identification result;
and (3): according to the background noise distribution estimated by the collected voice signals, compensating the collected voice signals by adopting a noise learning compensation algorithm so as to remove the background noise in the collected voice signals;
wherein, in the noise learning compensation algorithm, the voice signal collected by the acquisition device consists of the speaker signal and the background noise signal, related as shown in formula (1.1),
y(t) = x(t) + n(t)·w    (1.1)
wherein y(t) is the collected voice signal, x(t) is the speaker signal, n(t) is the background noise estimated in step (2), and w is the adaptively adjusted background noise parameter;
the solving process of w is shown as a formula (1.2), and the selection of w fully considers the time domain distribution of k moments before t moment;
[Formula (1.2), which defines the adaptive weight w from the amplitudes of y over the k moments preceding moment t, appears only as an image in the source and is not reproduced here.]
therefore, the specific calculation of the speaker signal in the collected speech signal is shown in equation 1.3:
x(t) = y(t) - n(t)·w    (1.3)
(reconstructed from formula (1.1); the source gives (1.3) only as an image)
wherein y(t) is the collected voice signal, x(t) is the speaker signal, n(t) is the background noise estimated in step (2) for the collected voice signal, and k is the adjustment parameter.
CN201910182463.7A 2019-03-11 2019-03-11 Background noise removing method based on learning compensation Active CN109841227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910182463.7A CN109841227B (en) 2019-03-11 2019-03-11 Background noise removing method based on learning compensation

Publications (2)

Publication Number Publication Date
CN109841227A CN109841227A (en) 2019-06-04
CN109841227B (en) 2019-03-11 2020-10-02

Family

ID=66885637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910182463.7A Active CN109841227B (en) 2019-03-11 2019-03-11 Background noise removing method based on learning compensation

Country Status (1)

Country Link
CN (1) CN109841227B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115022108A (en) * 2022-06-16 2022-09-06 深圳市欢太科技有限公司 Conference access method, conference access device, storage medium and electronic equipment
CN118471246B (en) * 2024-07-09 2024-10-11 杭州知聊信息技术有限公司 Audio analysis noise reduction method, system and storage medium based on artificial intelligence

Citations (4)

Publication number Priority date Publication date Assignee Title
CN1296607A * 1998-02-04 2001-05-23 Qualcomm Incorporated System and method for noise-compensated speech recognition
JP2007279349A (en) * 2006-04-06 2007-10-25 Toshiba Corp Feature amount compensation apparatus, method, and program
CN101710490A * 2009-11-20 2010-05-19 Anhui USTC iFlytek Information Technology Co., Ltd. Method and device for compensating noise for voice assessment
WO2011159628A1 (en) * 2010-06-14 2011-12-22 Google Inc. Speech and noise models for speech recognition

Non-Patent Citations (1)

Title
"Calculation of identifying noise sources by partial coherence analysis"; Zhao Hailan et al.; Noise and Vibration Control; Aug. 31, 2005 (No. 5); pp. 31-33 *

Also Published As

Publication number Publication date
CN109841227A (en) 2019-06-04

Similar Documents

Publication Publication Date Title
DE69121145T2 (en) SPECTRAL EVALUATION METHOD FOR IMPROVING RESISTANCE TO NOISE IN VOICE RECOGNITION
CN109890043B (en) Wireless signal noise reduction method based on generative countermeasure network
CN109841227B (en) Background noise removing method based on learning compensation
CN101526994B (en) Fingerprint image segmentation method irrelevant to collecting device
CN105225672B (en) Merge the system and method for the dual microphone orientation noise suppression of fundamental frequency information
CN112017682B (en) Single-channel voice simultaneous noise reduction and reverberation removal system
CN110544482B (en) Single-channel voice separation system
CN114963030A (en) Water supply pipeline monitoring method
CN107886050A (en) Utilize time-frequency characteristics and the Underwater targets recognition of random forest
DE102015221764A1 (en) Method for adjusting microphone sensitivities
CA3136870A1 (en) Method and apparatus for determining a deep filter
CN111508528B (en) No-reference audio quality evaluation method and device based on natural audio statistical characteristics
CN106803089B (en) Method for separating image information from image sequence based on nonlinear principal component analysis
CN111009259B (en) Audio processing method and device
CN101533642B (en) Method for processing voice signal and device
CN111402918A (en) Audio processing method, device, equipment and storage medium
CN108510996B (en) Fast iteration adaptive filtering method
CN103903631B (en) Voice signal blind separating method based on Variable Step Size Natural Gradient Algorithm
CN110299133A (en) The method for determining illegally to broadcast based on keyword
CN109272987A (en) A kind of sound identification method sorting coal and spoil
CN115691535A (en) RNN-based high signal-to-noise ratio voice noise reduction method, device, equipment and medium
CN118782067B (en) Audio signal noise suppression method and system
Sahu et al. Image denoising using principal component analysis in wavelet domain and total variation regularization in spatial domain
CN118364369B (en) Radiation source individual identification method based on distortion filtering
Cheng et al. The noise reduction of speech signals based on RBFN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant