CN114938487B - Hearing aid self-checking method based on sound field scene discrimination - Google Patents
- Publication number: CN114938487B (application CN202210521817.8A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- H04R25/507 — Customised settings for obtaining desired overall acoustical characteristics using digital signal processing implemented by neural network or fuzzy logic
- H04R25/30 — Monitoring or testing of hearing aids, e.g. functioning, settings, battery power
- Y02D30/70 — Reducing energy consumption in wireless communication networks
Abstract
The invention discloses a hearing aid self-fitting method based on sound field scene discrimination. First, patient user data are acquired, and a group of historical patient parameters similar to the current patient is precisely matched using the proposed similarity matching algorithm together with an optimized sound scene discrimination algorithm; this group serves as the patient's sub-parameter group. Second, a sampling comparison operation is performed on the sub-parameter group: the group is narrowed according to the preference fed back after each comparison, and a set of initial optimal parameters is obtained when the comparisons finish. The patient user then gives a 5-level evaluation of the test speech generated with the initial optimal parameters, and the gain is fine-tuned through a new problem-guided approach combined with a deep learning algorithm until the patient's evaluation is satisfactory. The method meets patients' personalized requirements and further improves the accuracy of the hearing aid parameters and patient satisfaction.
Description
Technical Field
The invention relates to the technical field of audio signal processing, and in particular to a hearing aid self-fitting method based on sound field scene discrimination.
Background
At present, population aging in China is increasingly pronounced and the proportion of people with hearing loss is rising, yet professional hearing practitioners such as fitters remain in severe shortage. Traditional hearing aids, with their complicated fitting procedures, tend to give patients a poor fitting experience and provoke resistance. With the spread of intelligent devices such as mobile phones and computers, self-fitting hearing aids are receiving widespread attention. Against this background, developing and optimizing a reasonable self-fitting rule for hearing aids has great research value and prospects.
The self-fitting hearing aid aims to do away with the traditional, complicated fitting procedure and the dependence on professional hearing personnel, enabling patients to fit the device themselves. Its core is to obtain satisfactory hearing aid fitting parameters for a patient by means of audio processing algorithms. Early related studies relied primarily on different linear or nonlinear prescription formulas to provide gain compensation for hearing-impaired patients. Later, some researchers proposed intelligent processing algorithms on top of this early work to further optimize parameters such as gain. For example, a genetic algorithm proposed early this century optimizes hearing aid echo cancellation parameters; building on this, an interactive method based on evolutionary computation was proposed for hearing aid self-fitting. However, conventional evolutionary algorithms such as the genetic algorithm suffer from a high number of iterations, slow convergence, and a tendency to fall into local optima, which hinders the practicality of hearing aid algorithms. In addition, most self-fitting hearing aid research ignores the patient's daily environment, assuming the patient is in a fairly ideal acoustic state and overlooking the significant influence of real scenes on self-fitting speech processing performance; for example, a patient's preferences for hearing aid gain-frequency response and compression parameter settings depend on the sound environment they are routinely in.
It can be seen that there is still considerable room for optimizing the existing hearing aid self-fitting algorithms described above. Given the challenges hearing aids face at the present stage, establishing a hearing aid self-fitting method based on sound field scene discrimination has great research value and significance.
Disclosure of Invention
The invention aims to: free patients from the complicated fitting process of traditional hearing aids, and solve problems such as low fitting efficiency and low parameter accuracy that arise when self-fitting hearing aids ignore environmental factors, underuse patient data, and do not refine the fitting parameters according to the patient's own perception. The invention not only takes acoustic environment factors into account, but also precisely matches the patient's parameter group by combining an optimized sound scene discrimination algorithm with a similarity matching algorithm, and shortens the fitting process with a sampling comparison operation, reducing algorithm complexity. In addition, a new mode of gain adjustment combining a deep learning algorithm with problem guidance better matches patients' individual differences and perception, helping to further improve the accuracy of hearing aid parameters and patient satisfaction.
The technical scheme is as follows: to solve the above technical problems and achieve the aim of the invention, the technical scheme adopted by the invention is a self-fitting method based on sound field scene discrimination, comprising the following steps:

Step 1: acquire patient data information to form a data feature sequence X_ori for the current patient user;

Step 2: based on the current patient data feature sequence acquired in step 1 and the existing patient information feature sequences, acquire a sub-parameter group C_3 using a similarity matching algorithm and a sound scene discrimination algorithm;

Step 3: denote the sub-parameter group obtained in step 2 as C_3 = [C_1, C_2, …, C_k, …, C_m], where m is the number of parameter sequences in C_3; perform a sampling comparison operation on C_3 to obtain a set of initial optimal parameter sequences.
Further, the self-fitting method based on sound field scene discrimination further comprises the following steps:

Step 4: the patient performs a 5-level speech evaluation of the test speech generated with the initial optimal parameter sequence, from the three aspects of audio quality, hearing comfort and speech clarity; if the evaluation is satisfactory, go to step 6, otherwise go to step 5.

Step 5: finely adjust the gain in a problem-guided manner.

Step 6: the patient's self-fitting ends.
Further, in step 5, the gain is finely adjusted in a problem-guided manner, specifically as follows:

Step 5.1: the problem guidance for gain adjustment comprises 10 problems, namely: speech too loud, speech too soft, echo, muddy or unclear speech, harsh speech, dull speech, speech hard to understand, foreground and background unclear in noisy speech, blurry speech, and metallic speech. The patient selects one or more problems according to their own experience and feeds back the degree of each problem; the larger the feedback value, the more serious the corresponding problem.

S = [s_1, s_2, … s_l, … s_10], s_l ∈ [0, 1]   (7)

where S is the problem feedback sequence and s_l is the feedback value for a particular problem.
Step 5.2: construct a gain adjustment neural network.

The gain adjustment neural network is a 4-layer neural network with 256 neurons per layer; the activation function is the ReLU function and the network weights are θ.

Step 5.3: train the gain adjustment neural network.

Traverse the knowledge base for the initial optimal parameter sequence and select the 3 groups of gain adjustment data whose problem feedback is most similar, for training the gain adjustment neural network.

Step 5.4: acquire the gain-adjusted parameter sequence using the trained gain adjustment neural network.

The inputs of the gain adjustment neural network are the initial optimal parameter sequence, the 3 groups of historical gain adjustment data g, the test speech spectrum h, the test speech score value_eval_ear, and the problem feedback sequence S; the network output is the gain-adjusted, i.e. updated, parameter sequence.
Further, the corresponding cost function under gain adjustment is constructed as shown in formula (8):

Q(h, g) = E[∑ value_eval_ear − ∑ S + αQ(h′, g′) | h, g]   (8)

where Q(h, g) is the cost function under the current gain adjustment, Q(h′, g′) is the cost function under the previous gain adjustment, and α is the adjustment weight.

A network training loss function is constructed as shown in formula (9); by minimizing the loss function L(θ_t), the gain-adjusted parameters are obtained:

L(θ_t) = E[Value_t − Q(h, g; θ_t)]   (9)

where L(θ_t) is the loss function, t is the iteration number, θ_t is the current network weights, and Value_t is the last maximum value of the cost function.
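As a concrete illustration of steps 5.2-5.4, the sketch below builds a small ReLU network and evaluates the cost-function-style loss of formulas (8) and (9) in scalar form. The layer widths, input dimension, and the squared residual in the loss are assumptions made only to keep the sketch runnable (the patent specifies 256 neurons per layer and writes the difference in (9) without squaring).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Stand-in for the 4-layer ReLU gain-adjustment network of step 5.2;
# widths are reduced from 256 purely to keep the sketch small.
widths = [16, 32, 32, 11]          # input dim and output (gain) dim are assumptions
theta = [(rng.normal(0, 0.1, (a, b)), np.zeros(b))
         for a, b in zip(widths[:-1], widths[1:])]

def q_net(x, theta):
    """Forward pass: ReLU on hidden layers, linear output."""
    for i, (w, b) in enumerate(theta):
        x = x @ w + b
        if i < len(theta) - 1:
            x = relu(x)
    return x

def loss(value_eval, s_feedback, q_prev, q_curr, alpha=0.9):
    """Formulas (8)-(9) in scalar form: the target rewards a high speech
    evaluation and penalises problem feedback; squaring the residual is
    an assumption (the patent writes the difference directly)."""
    target = np.sum(value_eval) - np.sum(s_feedback) + alpha * q_prev
    return (target - q_curr) ** 2

out = q_net(np.zeros(16), theta)                       # zero input -> zero output here
l = loss(np.array([4.0, 3.5]), np.array([0.2, 0.4]), q_prev=1.0, q_curr=2.0)
```

In a full implementation, θ would be trained by gradient descent on this loss over the 3 groups of historical gain adjustment data selected in step 5.3.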
further, in step 2, based on the current patient data feature sequence obtained in step 1 and the existing patient information feature sequence, a sub-parameter group C is selected through similarity matching and sound scene discrimination 3 The specific method comprises the following steps:
step 2.1, obtaining initial parameter group C in parallel 1 And initial parameter group C 2 ;
The initial parameter group C 1 The method comprises the steps of performing similarity matching firstly by an algorithm, then performing sound field scene discrimination, and particularly calculating the similarity between a current patient data characteristic sequence and an existing patient information characteristic sequence, sequencing from high to low, selecting the first half as sound scene discrimination data, and taking the optimal parameters corresponding to each sequence under the category of the patient as an initial parameter group C 1 。
The initial parameter group C 2 The algorithm firstly carries out sound scene discrimination and then carries out similarity matching to obtain, the specific pre-sound scene discrimination obtains the label class of the patient, then calculates the similarity between each sequence under the label class and the characteristic sequence of the patient, sorts the sequences from high to low, and selects the optimal parameter corresponding to the first half sequence as the initial parameter group C 2 。
Step 2.2, C 1 And C 2 Merging and de-duplicating to form a sub-parameter group C 3 。
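The merge-and-deduplicate operation of step 2.2 can be sketched as follows; keeping first-seen order after deduplication is an assumption, since the patent does not specify an ordering.

```python
def merge_parameter_groups(c1, c2):
    """Step 2.2: merge C_1 and C_2 and drop duplicate parameter
    sequences, keeping first-seen order (ordering is an assumption)."""
    seen, c3 = set(), []
    for seq in list(c1) + list(c2):
        key = tuple(seq)            # sequences made hashable for dedup
        if key not in seen:
            seen.add(key)
            c3.append(seq)
    return c3

c3 = merge_parameter_groups([[1, 2], [3, 4]], [[3, 4], [5, 6]])
```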
Further, in step 3, a sampling comparison operation is performed on the sub-parameter group C_3 to obtain a set of initial optimal parameter sequences, specifically as follows:

Randomly draw two parameter sequences C_i and C_j. The patient, after comparing how the two parameter sequences sound on the same test audio, gives a corresponding preference degree p with value range [0, 1]: the closer p is to 0, the more the C_i sequence is preferred, and conversely the C_j sequence. The sub-parameter group C_3 is updated according to the preference value p, forming a positive-feedback loop that ends when the preference degree is 50% or some parameter sequence has won the comparison twice.

The sub-parameter group C_3 is updated as follows: if the patient prefers C_i, for example, the parameter sequence C_i is retained and its neighbor parameter sequences with the highest degree of association are selected as the next group of comparison sequences, so that C_3 shrinks to the group formed by the preferred parameter sequence and its neighbors. This operation continuously narrows the range of the sub-parameter group being sampled and compared, reducing algorithm complexity. The degree of association between parameter sequences can be obtained with formulas (2), (3) and (4). When this step ends, a set of initial optimal parameter sequences is obtained.
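The step-3 positive-feedback comparison loop can be sketched as below. The preference and neighbor functions stand in for the patient's feedback and the formula (2)-(4) association computation; returning the first sequence on an exactly indifferent comparison and the round cap are assumptions.

```python
import random

def sample_compare(c3, preference_fn, neighbors_fn, max_rounds=20):
    """Draw two parameter sequences, get the preference p in [0,1]
    (p near 0 prefers the first), then shrink C_3 to the winner plus
    its nearest neighbors; stop on indifference (p = 0.5) or when a
    sequence has won twice."""
    wins = {}
    while len(c3) > 1 and max_rounds > 0:
        max_rounds -= 1
        ci, cj = random.sample(c3, 2)
        p = preference_fn(ci, cj)
        if abs(p - 0.5) < 1e-9:
            return ci                       # indifferent: end the loop
        winner = ci if p < 0.5 else cj
        key = tuple(winner)
        wins[key] = wins.get(key, 0) + 1
        if wins[key] >= 2:
            return winner                   # same sequence preferred twice
        c3 = [winner] + neighbors_fn(winner, c3)
    return c3[0]

# toy run: the "patient" always prefers the sequence with the smaller value
best = sample_compare([[1], [2]],
                      lambda a, b: 0.0 if a[0] < b[0] else 1.0,
                      lambda w, c: [])
```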
Further, the similarity f_k in step 2 is solved as shown in expressions (2), (3) and (4), and sound field scene discrimination is shown in expressions (5) and (6).

The patient's feature data are first mapped. Generate a feature variable X = [x_1, x_2, …, x_o, … x_n] of the same length as the patient's original feature sequence X_ori, with element values in the range [0, 1], normalizing features of the same category; the mapping rule is shown in formula (2):

x_o = (x_o^ori − min_o) / (max_o − min_o)   (2)

where k is the index of a feature sequence; o is the index of a specific feature within a sequence; max_o and min_o are the maximum and minimum of the o-th feature of X_ori; x_o^ori is the o-th feature value of X_ori; w_o^k is the weight of the o-th feature under the k-th sequence; n is the number of features in a single feature sequence; f_k is the similarity between the patient's feature sequence and the k-th previous patient feature sequence; Y_k is the mapping variable corresponding to the k-th previous patient feature sequence; and ‖·‖²₂ denotes the squared 2-norm.
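The min-max mapping of formula (2) and a similarity built from a weighted squared 2-norm distance, in the spirit of formulas (3) and (4), can be sketched as follows. The exact forms of (3) and (4) appear only as images in the source, so the distance-to-similarity conversion below is an assumption consistent with the symbols defined above.

```python
import numpy as np

def min_max_map(x_ori, x_min, x_max):
    """Formula (2): map each raw feature of X_ori into [0, 1]."""
    return (x_ori - x_min) / (x_max - x_min)

def similarity(x, y_k, w_k):
    """Similarity f_k between the mapped patient features x and the k-th
    historical mapped sequence Y_k, using the per-feature weights w_o^k.
    Turning the weighted squared 2-norm into 1/(1+d^2) is an assumption."""
    d2 = np.sum(w_k * (x - y_k) ** 2)   # weighted squared 2-norm
    return 1.0 / (1.0 + d2)

# toy example: two hearing-threshold-like features in [0, 120] dB
x = min_max_map(np.array([30.0, 60.0]), np.array([0.0, 0.0]), np.array([120.0, 120.0]))
f_k = similarity(x, np.array([0.25, 0.5]), np.array([0.5, 0.5]))
```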
further, the sound field scene discrimination algorithm in the step 2 depends on the selection of 2-dimensional environmental features, and the two-dimensional environmental feature data is assumed to be Independent of each other, the scene category set is D= [ D ] 1 ,D 2 ,…,D b ,…D num ]Num is a classification number, the value range of the num is 3-6, and the common categories are indoor, outdoor, music and general. To-be-classified set X for known classification in database a Training to obtain corresponding likelihood probability P (x a |D b ) And a priori probabilities P (D b ). When the environmental characteristic data is input, the corresponding class label can be obtained according to the formula (5).
In addition, the method is further optimized for the condition that the patient environment characteristic data has no corresponding sample in the training set. Selecting 3 sample solutions approximating the patient environmental characteristic data, i.e. mapping a similar sample set x a →X similar And obtaining the nearest category label by using the formula (6).
Wherein d represents patient environment characteristic data x a And training set X a Euclidean distance between samples; x is X similar Represents x a Is a similar sample of (1);and a sound scene category label corresponding to the patient is represented.
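The naive-Bayes-style decision of formula (5) and the nearest-sample fallback of formula (6) can be sketched as follows. The toy probability tables and the majority vote over the 3 nearest samples are assumptions; the source shows the formulas only as images.

```python
import numpy as np

def classify_scene(x, likelihoods, priors):
    """Formula (5): with the two environmental features assumed
    independent, score each class by prior times per-feature likelihoods
    and pick the maximum."""
    scores = {b: priors[b] * np.prod([likelihoods[b][f][v] for f, v in enumerate(x)])
              for b in priors}
    return max(scores, key=scores.get)

def nearest_label(x, train_x, train_y, k=3):
    """Formula (6) fallback: take the k training samples nearest to x by
    Euclidean distance and return their majority label (vote is assumed)."""
    d = np.linalg.norm(np.asarray(train_x, float) - np.asarray(x, float), axis=1)
    idx = np.argsort(d)[:k]
    votes = [train_y[i] for i in idx]
    return max(set(votes), key=votes.count)

# toy tables: 2 binary features, classes "indoor"/"outdoor"
priors = {"indoor": 0.6, "outdoor": 0.4}
likelihoods = {
    "indoor":  [{0: 0.8, 1: 0.2}, {0: 0.7, 1: 0.3}],
    "outdoor": [{0: 0.1, 1: 0.9}, {0: 0.2, 1: 0.8}],
}
label = classify_scene((0, 0), likelihoods, priors)
fallback = nearest_label((0, 0), [(0, 0), (0, 1), (5, 5), (5, 6)],
                         ["indoor", "indoor", "outdoor", "outdoor"])
```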
Further, in step 1, the patient data information comprises 4-dimensional basic information, 2 groups of 11-dimensional audiograms, 2 groups of 2-dimensional speech audiometry data, 2 groups of 3-dimensional speech evaluations, and 2-dimensional environmental data.

The 4-dimensional basic information comprises age, gender, hearing aid wearing time and medical history; the 2 groups of 11-dimensional audiograms are the hearing thresholds of the patient's left and right ears at 11 frequency points, namely 125 Hz, 250 Hz, 500 Hz, 750 Hz, 1 kHz, 1.5 kHz, 2 kHz, 3 kHz, 4 kHz, 6 kHz and 8 kHz; the 2 groups of 2-dimensional speech audiometry data are the speech recognition rate and speech recognition threshold of the patient's left and right ears; the 2-dimensional environmental data are the patient's selection of a daily activity scene and a corresponding activity, where the daily activity scenes comprise indoor, outdoor, natural, factory, cinema and noisy environments, and the activities comprise office, leisure, sports, boring and music.
Further, the patient performs the 5-level speech evaluation of the audiometric speech, for the left and right ears respectively, from the three aspects of audio quality, hearing comfort and speech clarity.

The speech evaluation is shown in expression (1):

value_eval_ear = ∑_{i=1}^{N} ∑_{j=1}^{M} value_{i,j}^{ear}   (1)

where value_eval_ear is the integrated monaural speech evaluation value and ear is the left/right ear identifier; i is the speech index, j is the index of the three evaluation aspects, and value_{i,j}^{ear} is the j-th score for the i-th speech of ear; M is the number of evaluation indexes, and N is the number of test voices, with a value range of 20 to 40.
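The integrated evaluation value can be computed as sketched below. Expression (1) itself appears only as an image in the source, so summing over all utterances and aspects is an assumed aggregation consistent with the symbol definitions here.

```python
def speech_eval(scores):
    """Integrated monaural evaluation value_eval_ear from 5-level scores.
    scores[i][j] is the j-th aspect score (quality, comfort, clarity),
    each in 1..5, for the i-th of N test utterances; summation over all
    i and j is the assumed aggregation."""
    return sum(sum(row) for row in scores)

# toy example: N = 2 test utterances, M = 3 aspects per utterance
value_eval_ear = speech_eval([[5, 4, 4], [3, 4, 5]])
```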
Compared with the prior art, the invention has the following beneficial effects:

1) The invention takes the patient's daily environmental factors into account and, by combining the optimized sound field scene discrimination algorithm with the similarity matching algorithm, can quickly and accurately acquire a parameter group similar to the patient's, greatly narrowing the fitting search range and improving fitting efficiency;

2) The invention reasonably accounts for patients' individual differences, continuously refining the tested parameter group through the sampling comparison operation, using the patient's preference between audio rendered under two different parameter sets. This reduces algorithm complexity, shortens the fitting process, and improves user satisfaction;

3) The invention fine-tunes the parameter gain according to the patient's individual needs through a novel combination of problem-guided gain adjustment and a deep learning algorithm, further improving the accuracy of the hearing aid parameters to meet patients' personalized requirements.
Drawings
FIG. 1 is a diagram of a model structure based on sound scene discrimination in the present invention;
FIG. 2 is a schematic diagram of a parameter update strategy of the present invention.
FIG. 3 is a schematic diagram comparing speech recognition rates of the self-fitting algorithm according to an embodiment of the present invention.
Detailed Description
The invention will be further elucidated with reference to the accompanying drawings. The invention discloses a hearing aid self-fitting method based on sound scene discrimination, aiming to do away with the traditional, complicated fitting process and excessive dependence on hearing specialists. First, patient data are acquired; second, a sound scene discrimination algorithm is combined with a similarity algorithm to acquire the corresponding sub-parameter group. A sampling comparison operation is performed on the sub-parameter group under the patient's corresponding category, and the preference fed back after each comparison drives the optimization of the sub-parameter group, forming a positive-feedback loop. The best selection after the loop completes is the patient user's set of initial optimal parameters. Then, the patient user performs a 5-level evaluation of the test speech generated with the initial optimal parameters, and the gain is fine-tuned through a new mode combining problem guidance with a deep learning algorithm until the patient's evaluation is satisfactory. The invention emphasizes acoustic environment influence factors and makes full use of historical patient data: the similarity matching algorithm and the optimized sound scene discrimination algorithm quickly obtain the patient's historically similar groups, the sampling comparison operation shortens the patient's fitting process, and deep learning combined with problem-guided gain adjustment further improves parameter accuracy.
As shown in FIG. 1, the invention discloses a hearing aid self-fitting method based on sound scene discrimination, comprising the following steps:

Step 1: acquire the patient-related data information: 4-dimensional basic information, 2 groups of 11-dimensional audiograms, 2 groups of 2-dimensional speech audiometry data, 2 groups of 3-dimensional speech evaluations, and 2-dimensional environmental data. The 4-dimensional basic information comprises age, gender, hearing aid wearing time and medical history; the 2 groups of 11-dimensional audiograms are the hearing thresholds of the patient's left and right ears at 11 frequency points (125 Hz, 250 Hz, 500 Hz, 750 Hz, 1 kHz, 1.5 kHz, 2 kHz, 3 kHz, 4 kHz, 6 kHz and 8 kHz); the 2 groups of 2-dimensional speech audiometry data are the speech recognition rate and speech recognition threshold of the patient's left and right ears; the 2 groups of 3-dimensional speech evaluations are the patient's 5-level evaluations of the audiometric speech for the left and right ears from the three aspects of audio quality, hearing comfort and speech clarity; the 2-dimensional environmental data are the patient's selection of a daily activity scene (indoor, outdoor, natural, factory, cinema, noisy environment) and a corresponding activity (office, leisure, sports, boring, music). These form the data feature sequence X_ori of the current patient user. The speech evaluation in step 1 is shown in expression (1), where value_eval_ear is the integrated monaural speech evaluation value and ear is the left/right ear identifier; i is the speech index, j is the index of the three evaluation aspects, value_{i,j}^{ear} is the j-th score for the i-th speech of ear, M is the number of evaluation indexes, and N is the number of test voices, ranging from 20 to 40.
Step 2: as shown in FIG. 2, based on the patient data acquired in step 1, acquire the initial parameter groups C_1 and C_2 in parallel, then merge them and remove duplicates to form the sub-parameter group C_3. The parallel acquisition prevents the loss of a better parameter sequence that a single acquisition path might miss, and achieves precise matching of the patient's parameter group. The initial parameter group C_1 is obtained by performing similarity matching first and sound field scene discrimination second: compute the similarity between the current patient data feature sequence and each existing patient information feature sequence, sort from high to low, select the top half as the sound scene discrimination data, and take the optimal parameters corresponding to each sequence within the patient's category as C_1. The initial parameter group C_2 is obtained by performing sound scene discrimination first and similarity matching second: first obtain the patient's label category through sound scene discrimination, then compute the similarity between each sequence under that label category and the patient's feature sequence, sort from high to low, and select the optimal parameters corresponding to the top half of the sequences as C_2. The similarity f_k in step 2 is solved as shown in expressions (2), (3) and (4), and sound field scene discrimination is shown in expressions (5) and (6).

To eliminate order-of-magnitude differences between the feature dimensions, the patient's feature data are mapped: generate a feature variable X = [x_1, x_2, …, x_o, … x_n] of the same length as the patient's original feature sequence X_ori, with element values in the range [0, 1], normalizing features of the same category according to the mapping rule of formula (2). Here k is the index of a feature sequence; o is the index of a specific feature within a sequence; max_o and min_o are the maximum and minimum of the o-th feature of X_ori; x_o^ori is the o-th feature value of X_ori; w_o^k is the weight of the o-th feature under the k-th sequence; n is the number of features in a single feature sequence; f_k is the similarity between the patient's feature sequence and the k-th previous patient feature sequence; Y_k is the mapping variable corresponding to the k-th previous patient feature sequence; and ‖·‖²₂ denotes the squared 2-norm.

The sound scene discrimination mainly relies on the selection of the 2-dimensional environmental features. The two-dimensional environmental feature data x_a are assumed to be mutually independent, and the category set is D = [D_1, D_2, …, D_b, … D_num], where the number of categories num ranges from 3 to 6; common categories are indoor, outdoor, music and general. Train on the classified set X_a with known labels in the database to obtain the corresponding likelihood probabilities P(x_a | D_b) and prior probabilities P(D_b); when environmental feature data are input, the corresponding category label is obtained according to formula (5). In addition, for the case where the patient's environmental feature data have no corresponding sample in the training set, select the 3 samples closest to the patient's environmental feature data for solving, i.e., map to the similar sample set x_a → X_similar, and obtain the nearest category label using formula (6), where d is the Euclidean distance between x_a and the samples of the training set X_a, and X_similar is the set of samples similar to x_a.
Step 3: denote the sub-parameter group obtained in step 2 as C_3 = [C_1, C_2, …, C_k, … C_m], where m is the number of parameter sequences in C_3. Perform a sampling comparison operation on C_3: randomly draw two parameter sequences C_i and C_j; the patient, comparing how the different parameter sequences sound on the same test audio, gives a corresponding preference degree p in the range [0, 1]. The closer p is to 0, the more C_i is preferred, and conversely C_j. Update C_3 according to the preference value p, forming a positive-feedback loop that ends when the preference degree is 50% or some parameter sequence has won the comparison twice. C_3 is updated as follows: if the patient prefers C_i, retain C_i and select its neighbor parameter sequences with the highest degree of association as the next group of comparison sequences, so that C_3 shrinks to the group formed by the preferred sequence and its neighbors. This continuously narrows the range of the sub-parameter group being sampled and compared, reducing algorithm complexity; the degree of association between parameter sequences can be obtained with formulas (2), (3) and (4). When this step ends, a set of initial optimal parameter sequences is obtained.
Step 4: the patient is based on an initial optimal parameter sequenceThe formed test voice can be subjected to 5-level voice evaluation from three aspects of audio quality, hearing comfort and voice definitionObtained by calculation of the formula (1). If the evaluation is satisfactory, the step 6 is carried out, otherwise, the step 5 is carried out. />
Step 5: the gain is finely adjusted by means of a problem-guided approach. The problem guidance for gain adjustment includes 10 problems, namely, too loud voice, too light voice, echo, cloudiness or unclear voice, harshness voice, clunk voice, difficult voice understanding, unclear main and secondary voice noisy, blurry voice and metallic voice. The patient can select one or more questions according to own experience and feed back the degree of the questions, and the larger the feedback value is, the more serious the corresponding questions are.
S = [s1, s2, … sl, … s10], sl ∈ [0, 1] (7)
Wherein S is the problem feedback sequence and sl is the feedback value for a particular problem.
The initial optimal parameter sequence is traversed from the knowledge base, and the 3 groups of gain adjustment data with the highest feedback similarity corresponding to the problems are selected for training the gain adjustment neural network. A 4-layer neural network is constructed, each layer with 256 neurons; the activation function is the ReLU function, and the network weight is θ. The network inputs are the parameter sequence, 3 sets of historical gain adjustment data g, the test speech spectrum h, the test speech score value_eval_ear and the problem feedback sequence S. The network output is the gain-adjusted parameter sequence, i.e. the updated parameter sequence. The corresponding cost function under gain adjustment is constructed as shown in formula (8).
Q(h, g) = E[Σ value_eval_ear − Σ S + αQ(h′, g′) | h, g] (8)
Where Q(h, g) represents the cost function under the current gain adjustment, Q(h′, g′) represents the cost function under the previous gain adjustment, and α represents the adjustment weight.
The network training loss function is constructed as shown in formula (9); by minimizing the loss function L(θt), the gain-adjusted parameters are obtained.
L(θt) = E[Value_t − Q(h, g; θt)] (9)
Wherein L(θt) is the loss function, t is the iteration number, θt is the current network weight, and Value_t is the previous maximum value of the cost function.
step 6: the patient self-tests to end.
Fig. 3 is a graph comparing the effects for eight patients under different self-fitting algorithms, including the traditional fitting algorithm, the genetic interactive fitting algorithm and the algorithm proposed by the present invention. As can be seen from Fig. 3, in the speech test the method provided by the invention has the better fitting effect: the average recognition rate reaches 81.5%, an improvement of 12.3% over the genetic interaction algorithm and of 15.6% over the traditional algorithm. The recognition rate of patient T2 is the highest at 89.3%, the recognition rate of patient T7 is the lowest at 72.5%, and the improvement for patient T3 is the most obvious.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that modifications and adaptations may be made by those skilled in the art without departing from the principles of the present invention, and such modifications are intended to fall within the scope of the present invention.
Claims (10)
1. A hearing aid self-fitting method based on sound scene discrimination, characterized by comprising the following steps:
step 1: acquiring patient data information to form a data feature sequence X_ori of the current patient user, the patient data information including 2-dimensional environmental data, namely daily activity scenes and corresponding activities;
step 2: based on the current patient data feature sequence obtained in step 1 and the existing patient information feature sequences, obtaining a sub-parameter group C3 by using a similarity matching algorithm and a sound scene discrimination algorithm, the specific method being as follows:
step 2.1, obtaining the initial parameter group C1 and the initial parameter group C2 in parallel;
the initial parameter group C1 is obtained by first performing similarity matching and then performing sound field scene discrimination; specifically, the similarity between the current patient data feature sequence and each existing patient information feature sequence is calculated and sorted from high to low, the first half is selected as the sound scene discrimination data, and the optimal parameters corresponding to each sequence under the category of the patient are taken as the initial parameter group C1;
the initial parameter group C2 is obtained by first performing sound scene discrimination and then similarity matching; specifically, the sound scene discrimination first obtains the label class of the patient, then the similarity between each sequence under that label class and the patient feature sequence is calculated and sorted from high to low, and the optimal parameters corresponding to the first half of the sequences are selected as the initial parameter group C2;
step 2.2, merging C1 and C2 and removing duplicates to form the sub-parameter group C3;
Step 3: sub-parameter group C obtained in step 2 3 =[C 1 ,C 2 ,…,C k ,…C m ]M represents C 3 The number of parameter sequences of (a); for sub-parameter group C 3 Sampling and comparing to obtain a group of initial optimal parameter sequencesThe method comprises the following steps:
randomly extracting two sets of parameter sequencesAnd->Patient(s)According to the comparison feeling of different parameter sequences under the same test audio, the corresponding preference degree is given>Its value range [0,1 ]]The method comprises the steps of carrying out a first treatment on the surface of the When the preference degree is closer to 0, the more preferred +.>Sequence, conversely, preference->A sequence; according to preference value->Updating sub-parameter group C 3 A positive feedback loop operation is formed until the preference degree is 50% or a certain parameter sequence loops 2 times to finish the link.
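The step-2.2 merge-and-deduplicate operation can be sketched as follows, assuming each parameter sequence is represented as a hashable tuple (the function name is an illustration, not part of the claims):

```python
def merge_parameter_groups(C1, C2):
    """Merge the two initial parameter groups and drop duplicate
    parameter sequences, preserving first-seen order."""
    seen, C3 = set(), []
    for seq in C1 + C2:
        if seq not in seen:
            seen.add(seq)
            C3.append(seq)
    return C3
```

Order preservation matters only if the later sampling step wants a stable pool; a plain set union would also satisfy the claim.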
2. The hearing aid self-fitting method based on sound scene discrimination according to claim 1, further comprising the steps of:
step 4: the patient performs a 5-level speech evaluation, from the three aspects of audio quality, hearing comfort and speech clarity, of the test speech formed from the initial optimal parameter sequence; if the evaluation is satisfactory, go to step 6, otherwise go to step 5;
step 5: finely adjusting the gain in a problem-guided manner;
step 6: the patient self-test ends.
3. The hearing aid self-fitting method based on sound scene discrimination according to claim 2, wherein in step 5 the gain is finely adjusted by a problem-guided method, specifically comprising the following steps:
step 5.1, the problem guidance for gain adjustment comprises 10 problems: speech too loud, speech too soft, echo, muddy or unclear speech, harsh speech, dull speech, speech difficult to understand, noisy speech with unclear primary and secondary sounds, blurry speech, and metallic speech; the patient selects one or more problems according to his or her own feeling and feeds back the degree of each problem; the larger the feedback value, the more serious the corresponding problem;
S = [s1, s2, … sl, … s10], sl ∈ [0, 1] (7)
wherein S is the problem feedback sequence and sl is the feedback value under a specific problem;
step 5.2, constructing the gain adjustment neural network;
the gain adjustment neural network is a 4-layer neural network, each layer having 256 neurons; the activation function is the ReLU function, and the network weight is θ;
step 5.3, training the gain adjustment neural network;
the initial optimal parameter sequence is traversed from the knowledge base, and the 3 groups of gain adjustment data with the highest feedback similarity corresponding to the problems are selected for training the gain adjustment neural network;
step 5.4, obtaining the gain-adjusted parameter sequence by using the trained gain adjustment neural network;
the inputs of the gain adjustment neural network are the parameter sequence, 3 sets of historical gain adjustment data g, the test speech spectrum h, the test speech score value_eval_ear and the problem feedback sequence S; the network output is the gain-adjusted parameter sequence, i.e. the updated parameter sequence.
4. The hearing aid self-fitting method based on sound scene discrimination according to claim 3, wherein the corresponding cost function under gain adjustment is constructed as shown in formula (8):
Q(h, g) = E[Σ value_eval_ear − Σ S + αQ(h′, g′) | h, g] (8)
wherein Q(h, g) represents the cost function under the current gain adjustment, Q(h′, g′) represents the cost function under the previous gain adjustment, and α represents the adjustment weight;
the network training loss function is constructed as shown in formula (9); by minimizing the loss function L(θt), the gain-adjusted parameters are obtained:
L(θt) = E[Value_t − Q(h, g; θt)] (9)
5. The hearing aid self-fitting method based on sound scene discrimination according to claim 1, wherein the sub-parameter group C3 in step 3 is updated as follows: if the patient prefers a parameter sequence, that parameter sequence is retained, and the neighbour parameter sequence with the highest association degree is selected as the next group of comparison parameter sequences, so that the sub-parameter group C3 is reduced to a parameter group formed by the preferred parameter sequence and the neighbour parameter sequence; after this step, a group of initial optimal parameter sequences is obtained.
6. The hearing aid self-fitting method based on sound field scene discrimination according to claim 1 or 5, wherein in step 2 the similarity between the current patient data feature sequence and the existing patient information feature sequences is calculated; specifically, the similarity f_k between the patient feature data sequence and the k-th previous patient feature sequence is solved as shown in expressions (2), (3) and (4);
wherein k represents the feature sequence number; o represents a specific feature number within the sequence; the maximum value, the minimum value and the value of the o-th feature of X_ori are used; the weight of the o-th feature under the k-th sequence is used; n represents the number of features in a single feature sequence; Y_k represents the mapping variable corresponding to the k-th previous patient feature sequence; and the square of the 2-norm is used.
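A minimal sketch of the similarity computation described in claim 6, hedged heavily: expressions (2)-(4) are not reproduced in the text, so min-max normalisation of each feature, per-feature weighting, and converting the squared weighted 2-norm of the residual into a similarity in (0, 1] is an assumed form consistent with the symbols listed above.

```python
import numpy as np

def similarity(x, y, w, x_max, x_min):
    """Assumed form of f_k: min-max normalise the patient sequence x and
    the k-th stored sequence y using the per-feature max/min of X_ori,
    weight the differences with w, and map the squared 2-norm of the
    residual to a similarity (higher = more similar)."""
    span = np.where(x_max > x_min, x_max - x_min, 1.0)  # avoid divide-by-zero
    xn = (x - x_min) / span
    yn = (y - x_min) / span
    d2 = np.sum((w * (xn - yn)) ** 2)                   # squared weighted 2-norm
    return 1.0 / (1.0 + d2)
```

Identical sequences score exactly 1, and the score decays smoothly as the weighted distance grows, which is all the sorting in step 2 requires.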
7. The hearing aid self-fitting method based on sound scene discrimination according to claim 6, wherein in step 3, if the patient prefers a parameter sequence, that parameter sequence is retained, and the neighbour parameter sequence with the highest association degree is selected as the next group of comparison parameter sequences.
8. The hearing aid self-fitting method based on sound scene discrimination according to claim 1, wherein the sound scene discrimination algorithm in step 2 relies on the 2-dimensional environmental feature selection in step 1; the two-dimensional environmental feature data are assumed to be independent of each other; the scene category set is D = [D1, D2, …, Db, … Dnum], where num is the number of classes, its value ranging from 3 to 6, the common categories being indoor, outdoor, music and general; the set X_a with known classification in the database is trained to obtain the corresponding likelihood probabilities P(x_a|D_b) and prior probabilities P(D_b); when environmental feature data are input, the corresponding class label can be obtained according to formula (5);
in addition, a further optimization is performed for the case in which the patient environment feature data have no corresponding sample in the training set: the 3 samples closest to the patient environment feature data are selected for solving, i.e. the mapping to a similar sample set x_a → X_similar is performed, and then the nearest class label is obtained by using formula (6);
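The naive-Bayes discrimination of claim 8 can be sketched as follows. This is an illustrative reading of formula (5), which is not reproduced: priors and per-dimension likelihoods are estimated by counting, the two environment features are treated as conditionally independent as stated, and add-one smoothing (my addition, not in the claim) keeps unseen feature values from zeroing the product.

```python
from collections import Counter, defaultdict

def train_naive_bayes(X, y):
    """Estimate priors P(D_b) and per-dimension likelihoods P(x|D_b)
    from labelled environment data by simple counting."""
    priors = Counter(y)
    likeli = defaultdict(Counter)            # (class, dim) -> value counts
    for features, label in zip(X, y):
        for dim, v in enumerate(features):
            likeli[(label, dim)][v] += 1
    return priors, len(y), likeli

def classify(x, priors, total, likeli):
    """Return argmax_b P(D_b) * prod_dim P(x[dim] | D_b), with add-one
    smoothing for unseen feature values."""
    best, best_p = None, -1.0
    for label, cnt in priors.items():
        p = cnt / total
        for dim, v in enumerate(x):
            c = likeli[(label, dim)]
            p *= (c[v] + 1) / (sum(c.values()) + len(c) + 1)
        if p > best_p:
            best, best_p = label, p
    return best
```

The kNN fallback of formula (6) would take over only when `x` has no usable counts at all, i.e. no corresponding sample in the training set.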
9. The hearing aid self-fitting method based on sound scene discrimination according to claim 1, wherein in step 1, the patient data information includes 4-dimensional basic information, 2 sets of 11-dimensional audiograms, 2 sets of 2-dimensional speech audiometry data, 2 sets of 3-dimensional speech evaluations, and 2-dimensional environmental data;
the 4-dimensional basic information comprises age, gender, hearing aid wearing time and medical history selection; 2 groups of 11-dimensional audiograms, namely the hearing threshold values of left and right ears of a patient at 11 frequency points, wherein the 11 frequency points are 125Hz,250Hz,500Hz,750Hz,1KHz,1.5KHz,2KHz,3KHz,4KHz,6KHz and 8KHz respectively; 2 sets of 2-dimensional speech audiometric data, namely speech recognition rate and speech recognition valve of the left and right ears of the patient; the daily activity scenes in the 2-dimensional environment data include indoor, outdoor, natural, factory, cinema and noisy environments, and the activities in the 2-dimensional environment data include office, leisure, sports, boring and music.
10. The hearing aid self-fitting method based on sound scene discrimination according to claim 2, wherein the 5-level speech evaluation from three aspects is a 5-level evaluation of the audiometric speech, namely a 5-level speech evaluation of the left and right ears from the three aspects of audio quality, hearing comfort and speech clarity;
the speech evaluation is shown in expression (1);
wherein value_eval_ear represents the integrated monaural speech evaluation value, and ear represents the left/right ear identifier; i denotes the speech sequence number, and j denotes the sequence number of the three evaluation aspects, the score being the j-th score under the i-th voice of the ear; m represents the number of evaluation indexes; n represents the number of test voices, with values ranging from 20 to 40.
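A hedged sketch of expression (1), which is not reproduced in the text: a simple mean of the n × m score matrix is one plausible form of the integrated monaural value, consistent with the symbols listed above, but the actual weighting in the patent may differ.

```python
import numpy as np

def value_eval(scores):
    """Assumed form of expression (1): the monaural evaluation is the mean
    of the n x m score matrix, where row i holds the m = 3 ratings
    (audio quality, hearing comfort, speech clarity, each on a 5-level
    scale) for test voice i, with n in [20, 40] per the claim."""
    scores = np.asarray(scores, dtype=float)
    n, m = scores.shape                  # n test voices x m evaluation aspects
    return scores.sum() / (n * m)        # integrated value_eval_ear
```

One such value is computed per ear, and the pair feeds both the step-4 satisfaction check and the gain-adjustment network input.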
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210521817.8A CN114938487B (en) | 2022-05-13 | 2022-05-13 | Hearing aid self-checking method based on sound field scene discrimination |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114938487A CN114938487A (en) | 2022-08-23 |
CN114938487B true CN114938487B (en) | 2023-05-30 |
Family
ID=82864937
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210521817.8A Active CN114938487B (en) | 2022-05-13 | 2022-05-13 | Hearing aid self-checking method based on sound field scene discrimination |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114938487B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116614757B (en) * | 2023-07-18 | 2023-09-26 | 江西斐耳科技有限公司 | Hearing aid fitting method and system based on deep learning |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113411733A (en) * | 2021-06-18 | 2021-09-17 | 南京工程学院 | Parameter self-adjusting method for fitting-free hearing aid |
CN114339564A (en) * | 2021-12-23 | 2022-04-12 | 清华大学深圳国际研究生院 | User self-adaptive hearing aid self-fitting method based on neural network |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9363614B2 (en) * | 2014-02-27 | 2016-06-07 | Widex A/S | Method of fitting a hearing aid system and a hearing aid fitting system |
CN104053112B (en) * | 2014-06-26 | 2017-09-12 | 南京工程学院 | A kind of audiphone tests method of completing the square certainly |
US10757517B2 (en) * | 2016-12-19 | 2020-08-25 | Soundperience GmbH | Hearing assist device fitting method, system, algorithm, software, performance testing and training |
EP3773197A1 (en) * | 2018-04-11 | 2021-02-17 | Two Pi GmbH | Method for enhancing the configuration of a hearing aid device of a user |
CN108922616A (en) * | 2018-06-26 | 2018-11-30 | 常州工学院 | A kind of hearing aid is quickly from testing method of completing the square |
CN109246515B (en) * | 2018-10-09 | 2019-10-29 | 王青云 | A kind of intelligent earphone and method promoting personalized sound quality function |
CN112653980B (en) * | 2021-01-12 | 2022-02-18 | 东南大学 | Interactive self-checking and matching method for intelligent hearing aid |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |