WO2025041015A1 - Electric-acoustic stimulation parameter adjustment
- Publication number: WO2025041015A1 (PCT/IB2024/057967)
- Authority: WIPO (PCT)
Abstract
Presented herein are a system and a method comprising obtaining one or more electrophysiological measures of acoustic hearing associated with a hearing device recipient, obtaining one or more hearing performance measures of the recipient, analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures, and generating an output based on the analyzing of the one or more electrophysiological measures with respect to the one or more hearing performance measures.
Description
ELECTRIC-ACOUSTIC STIMULATION PARAMETER ADJUSTMENT
BACKGROUND
Field of the Invention
[0001] The present invention relates generally to adjustment of electric-acoustic stimulation parameters.
Related Art
[0002] Medical devices have provided a wide range of therapeutic benefits to recipients over recent decades. Medical devices can include internal or implantable components/devices, external or wearable components/devices, or combinations thereof (e.g., a device having an external component communicating with an implantable component). Medical devices, such as traditional hearing aids, partially or fully-implantable hearing prostheses (e.g., bone conduction devices, mechanical stimulators, cochlear implants, etc.), pacemakers, defibrillators, functional electrical stimulation devices, and other medical devices, have been successful in performing lifesaving and/or lifestyle enhancement functions and/or recipient monitoring for a number of years.
[0003] The types of medical devices and the ranges of functions performed thereby have increased over the years. For example, many medical devices, sometimes referred to as “implantable medical devices,” now often include one or more instruments, apparatus, sensors, processors, controllers or other functional mechanical or electrical components that are permanently or temporarily implanted in a recipient. These functional devices are typically used to diagnose, prevent, monitor, treat, or manage a disease/injury or symptom thereof, or to investigate, replace or modify the anatomy or a physiological process. Many of these functional devices utilize power and/or data received from external devices that are part of, or operate in conjunction with, implantable components.
SUMMARY
[0004] In one aspect, a first method is provided. The first method comprises: obtaining one or more electrophysiological measures of acoustic hearing associated with a hearing device recipient; obtaining one or more hearing performance measures of the recipient; analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures; and generating an output based on the analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures.
[0005] In another aspect, a second method is provided. The second method comprises: performing one or more electrophysiological tests to obtain one or more evoked potentials or one or more impedances from a hearing device recipient; correlating the one or more evoked potentials or impedances with one or more hearing performance measures; and automatically providing one or more electric-acoustic stimulation (EAS) programming adjustments for the hearing device based on the correlating.
[0006] In another aspect, one or more non-transitory computer readable storage media are provided. The one or more non-transitory computer readable storage media comprise instructions that, when executed by a processor, cause the processor to: obtain, from a hearing device, one or more electrophysiological measures of acoustic hearing associated with a recipient of the hearing device; obtain one or more hearing performance measures of the recipient; analyze the one or more electrophysiological measures with respect to the one or more hearing performance measures; and initiate at least one adjustment to operation of the hearing device based on the analyzing of the one or more electrophysiological measures with respect to the one or more hearing performance measures.
[0007] In another aspect, a hearing device system is provided. The hearing device system comprises: one or more sensors; a memory storing computer-readable instructions; and a processor configured to execute the computer-readable instructions to: perform one or more electrophysiological tests to obtain one or more evoked potentials or one or more impedances from a recipient of the hearing device; correlate the one or more evoked potentials or the one or more impedances with one or more hearing performance measures; and automatically provide one or more electric-acoustic stimulation (EAS) programming adjustments for the hearing device based on the correlating.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] Embodiments of the present invention are described herein in conjunction with the accompanying drawings, in which:
[0009] FIG. 1A is a schematic diagram illustrating an electric-acoustic hearing device, in accordance with certain embodiments presented herein;
[0010] FIG. 1B is a block diagram of the electric-acoustic hearing device of FIG. 1A;
[0011] FIG. 1C is a block diagram of a recipient device, such as a mobile phone or computer, on which stimulation adjustment logic can be implemented, in accordance with certain embodiments presented herein;
[0012] FIG. 2 illustrates example graphical user interface (GUI) screens of a computing device for using Ecological Momentary Assessment (EMA) techniques to gather recipient input regarding subjective hearing ability, according to an example embodiment;
[0013] FIG. 3A is a schematic view representing an initial electric-acoustic stimulation (EAS) programming parameter set for acoustic stimulation and electric stimulation, respectively;
[0014] FIG. 3B is a schematic view representing a change from acoustic stimulation to electric- only stimulation in an affected region where a change in thresholds is detected, according to an example embodiment;
[0015] FIG. 3C is a schematic view representing increasing acoustic stimulation in the affected region where the change in thresholds is detected, according to an example embodiment;
[0016] FIG. 3D is a schematic view representing a change to electric-acoustic stimulation (with an optional increase in acoustic stimulation) in the affected region where the change in thresholds is detected, according to an example embodiment;
[0017] FIG. 4 is a conceptual diagram illustrating an example system in accordance with one or more aspects of this disclosure;
[0018] FIG. 5 is a flowchart of operations of a first method for enabling automated electric-acoustic stimulation programming adjustments, according to an example embodiment; and
[0019] FIG. 6 is a flowchart of operations of a second method for enabling automated electric-acoustic stimulation programming adjustments, according to an example embodiment.
DETAILED DESCRIPTION
Overview
[0020] Individuals suffer from different types of hearing loss (e.g., conductive and/or sensorineural) and/or different degrees/severity of hearing loss. However, it is now common for many recipients to retain some residual natural hearing ability (residual hearing) after receiving a hearing device. For example, progressive improvements in the design of intra-cochlear electrode arrays (stimulating assemblies), surgical implantation techniques, tooling, etc. have enabled atraumatic surgeries which preserve at least some of the recipient’s fine inner ear structures (e.g., cochlea hair cells) and the natural cochlea function, particularly in the lower frequency regions of the cochlea.
[0021] Due, at least in part, to the ability to preserve residual low-frequency acoustic hearing during cochlear implant surgery, the number of recipients who are candidates for electric-acoustic stimulation hearing devices (e.g., devices that deliver both electric and acoustic or mechanical stimulation, sometimes referred to herein as electric-acoustic stimulation (EAS)) has continued to expand. Typically, due to the limits of residual hearing, the acoustic stimulation is used to present sound signal components corresponding to the lower frequencies of sound signals, while the electrical stimulation is used to present sound signal components corresponding to the higher frequencies of sound signals.
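The low/high frequency split described above can be illustrated with a short sketch. This is a hypothetical, simplified allocation rule (the function name, audiogram grid, and 65 dB HL "usable limit" are illustrative assumptions, not a fitting prescription from this disclosure): frequencies where residual acoustic thresholds remain usable stay acoustic, and electric stimulation takes over above the first frequency where they do not.

```python
# Hypothetical sketch: choose an electric-acoustic crossover frequency from
# a recipient's residual (acoustic) hearing thresholds. Frequencies with
# thresholds at or below a usable limit stay acoustic; the rest go electric.
AUDIOGRAM_FREQS_HZ = [125, 250, 500, 1000, 2000, 4000, 8000]

def choose_crossover_hz(thresholds_db_hl, usable_limit_db=65):
    """Return the lowest audiogram frequency whose acoustic threshold
    exceeds the usable limit; electric stimulation covers that frequency
    and above, while acoustic stimulation covers everything below it."""
    for freq, threshold in zip(AUDIOGRAM_FREQS_HZ, thresholds_db_hl):
        if threshold > usable_limit_db:
            return freq
    return AUDIOGRAM_FREQS_HZ[-1]  # all measured frequencies remain acoustic

# Example: good low-frequency residual hearing with steeply sloping loss.
crossover = choose_crossover_hz([30, 35, 45, 70, 90, 100, 110])
# crossover == 1000: acoustic below 1 kHz, electric at and above it
```

In practice a fitting would consider more than a single limit (slope, dynamic range, recipient preference), but the sketch captures the allocation the paragraph describes.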
[0022] Electric-acoustic stimulation hearing devices provide substantial benefits over electric-only hearing devices by preserving interaural time difference and temporal fine structure cues for improved sound localization, speech-in-noise performance, speech naturalness, and music perception. That is, recipients with residual hearing typically benefit from having acoustic stimulation in addition to electrical stimulation, because the acoustic stimulation adds a more “natural” sound to their hearing perception compared to electrical stimulation signals only in that ear. In particular, temporal coding of auditory signals is particularly important for low frequencies and is best achieved with acoustic stimulation. As such, combined electric-acoustic stimulation benefits specifically from this low-frequency acoustic coding. For example, the addition of the acoustic stimulation can provide improved pitch and music perception and/or appreciation, as the acoustic signals can contain a more salient lower frequency (e.g., fundamental pitch, F0) representation than is possible with electrical stimulation. Other benefits of residual hearing can include, for example, improved sound localization, binaural release from unmasking, the ability to distinguish acoustic signals in a noisy environment, etc.
[0023] On average, cochlear implantation causes a 20-30 dB shift in low-frequency acoustic hearing thresholds. However, the acoustic hearing changes experienced by different recipients vary widely, and recipients can experience anything from total acoustic hearing loss to no acoustic hearing loss. Not only is the degree of acoustic hearing loss highly variable among recipients, but the acoustic hearing loss can also arise at different times and progress at different rates. When acoustic hearing thresholds shift, the parameters of the electric-acoustic stimulation hearing device (sometimes referred to herein as the electric-acoustic (EAS) programming or electric-acoustic stimulation parameters) should also be adjusted to provide optimal electric-acoustic input.
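The threshold-shift trigger described above can be sketched as a simple serial comparison. This is a minimal illustration under stated assumptions (the function name, the frequency grid, and the 10 dB "clinically meaningful" margin are all hypothetical choices, not values taken from this disclosure): compare a current audiogram against a baseline and flag the frequency regions where hearing has worsened enough that the EAS programming should be re-fit there.

```python
# Hypothetical sketch: flag frequency regions whose acoustic thresholds have
# shifted since a baseline (e.g., post-operative) audiogram by more than a
# chosen margin, so EAS parameters can be re-fit in those regions.
def shifted_regions(baseline_db, current_db, freqs_hz, margin_db=10):
    """Return (frequency_hz, shift_db) pairs where the current threshold
    is worse (higher) than baseline by more than margin_db."""
    return [
        (f, cur - base)
        for f, base, cur in zip(freqs_hz, baseline_db, current_db)
        if cur - base > margin_db
    ]

freqs = [125, 250, 500, 1000]
regions = shifted_regions([30, 35, 45, 70], [35, 60, 75, 75], freqs)
# regions == [(250, 25), (500, 30)]: the 250-500 Hz region worsened
# beyond the margin and is a candidate for re-programming
```

Such a comparison is what the later sections automate via electrophysiological correlates instead of repeated in-clinic pure-tone audiometry.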
[0024] In conventional arrangements, the adjustment of electric-acoustic stimulation parameters typically involves frequent in-person clinic visits where a professional performs pure-tone audiometry and manually adjusts the parameters. However, recipients often do not have ready access to clinics due to, for example, costs, lack of insurance coverage, low availability of trained audiologists, long distances from clinics, etc. Therefore, the need for multiple clinic visits for adjustments of electric-acoustic stimulation parameters can not only be cost prohibitive for certain recipients, but can also require the recipient to live with improper sound perceptions (possibly unknowingly) for significant periods of time.
[0025] Accordingly, presented herein are techniques for adjustment of electric-acoustic stimulation parameters of a medical device, such as an electric-acoustic hearing device, based on electrophysiological measures of acoustic hearing associated with a medical device recipient in combination with one or more hearing performance measures of the recipient. The one or more electrophysiological measures are analyzed with respect to the one or more hearing performance measures to set, adjust, determine, etc. (collectively and generally referred to herein as “set”) one or more electric-acoustic stimulation (EAS) programming adjustments for the hearing device based on the analysis.
[0026] For example, in certain examples, electrophysiological measures (e.g., one or more evoked potentials or one or more impedances) obtained via one or more electrophysiological tests are correlated with one or more hearing performance measures to automatically set one or more electric-acoustic stimulation parameters of a hearing device. In certain examples, the techniques presented herein include a machine-learned clinical tool that is configured to make predictions about acoustic hearing loss. In certain examples, the techniques presented herein monitor for device malfunction based on a relative analysis (e.g., comparison) of expected acoustic and electrically evoked responses.
[0027] In accordance with the techniques presented herein, electrophysiological measures that serve as correlates to “acoustic hearing thresholds” (e.g., various evoked potentials, impedances, etc.), as well as “hearing performance measures” (e.g., objective/behavioral performance, in-situ audiometry, subjective hearing ability, listening effort, and conversational engagement) based on recipient input, can be obtained by an implantable medical device (including, but not limited to, a cochlear implant) and used to automatically provide appropriate electric-acoustic stimulation programming adjustments without the need for an in-person clinic visit. As a result, the system and techniques described herein improve upon conventional techniques that require frequent in-person clinic visits to optimize electric-acoustic stimulation programming.
[0028] According to one aspect, the system and techniques described herein enable automated changes to electric-acoustic stimulation programming by: (1) monitoring electrophysiological measures/correlates of acoustic hearing (e.g., evoked potentials and impedances) and hearing performance measures (e.g., measures of objective performance, in-situ audiometry, subjective hearing ability, listening effort, and/or conversational engagement); (2) applying one or more of these metrics through a machine-learned model to infer changes to acoustic hearing thresholds and/or device deficiencies; and (3) automatically adjusting electric-acoustic stimulation programming parameters (and/or notifying a professional of potential changes in hearing/device functionality so that corresponding adjustments can be made). For example, monitoring subjective hearing ability can involve obtaining recipient input via Ecological Momentary Assessment (EMA), as explained further below.
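The three-step loop above can be sketched in code. This is a hypothetical illustration only: the disclosure contemplates a machine-learned model, which is stood in for here by a trivial rule-based stub, and every name, field, and numeric cutoff below is an assumption made for the sketch, not a value from this disclosure.

```python
# Hypothetical sketch of the three-step loop: (1) gather monitored measures,
# (2) infer a change via a (stubbed) learned model, (3) either adjust EAS
# parameters or notify a professional. All names and cutoffs are illustrative.
def infer_change(measures):
    """Stand-in for a machine-learned classifier: maps monitored measures
    to 'stable', 'threshold_shift', or 'device_deficiency'."""
    if measures["impedance_kohm"] > 20:          # abnormally high impedance
        return "device_deficiency"
    if measures["evoked_potential_uv"] < 0.5 * measures["baseline_potential_uv"]:
        return "threshold_shift"                  # evoked response degraded
    return "stable"

def adjustment_loop(measures):
    """One pass of the automated programming-adjustment pipeline."""
    state = infer_change(measures)
    if state == "threshold_shift":
        return {"action": "adjust_eas", "detail": "re-fit acoustic gain / crossover"}
    if state == "device_deficiency":
        return {"action": "notify_professional", "detail": "possible device issue"}
    return {"action": "none", "detail": "no change detected"}

result = adjustment_loop(
    {"impedance_kohm": 8.0, "evoked_potential_uv": 0.3, "baseline_potential_uv": 1.0}
)
# result["action"] == "adjust_eas": the degraded evoked potential is treated
# as a correlate of an acoustic threshold shift
```

A real implementation would replace `infer_change` with the trained classification model referenced elsewhere in this disclosure and would fold in the EMA-derived subjective measures alongside the electrophysiological ones.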
[0029] In this way, aspects of the techniques presented herein provide an automated clinical application of these electrophysiological metrics for electric-acoustic stimulation recipients to detect changes in acoustic hearing or device deficiencies and determine programming adjustments without the need for frequent in-person clinic visits, or to provide justification for a clinic visit.
[0030] It is to be appreciated that there are a number of different types of electronic devices in/with which the techniques presented herein can be implemented. Merely for ease of description, the techniques presented herein are primarily described with reference to a specific electronic device in the form of a cochlear implant system. However, it is to be appreciated that the techniques presented herein can also be partially or fully implemented by any of a number of different types of devices, including consumer electronic devices (e.g., mobile/wireless devices, wearables, computing devices, televisions, appliances/white goods, Internet-of-Things (IoT) devices, audio equipment, etc.), computing systems (e.g., servers in data centers), various types of software systems, such as databases, machine learning and artificial intelligence systems, other medical devices, diagnostic equipment, etc. For example, the techniques presented herein could be used in or with hearing devices, various implantable medical devices, such as vestibular devices (e.g., vestibular implants), visual devices (i.e., bionic eyes), sensors, pacemakers, drug delivery systems, defibrillators, functional electrical stimulation devices, catheters, seizure devices (e.g., devices for monitoring and/or treating epileptic events), sleep apnea devices, electroporation devices, etc.
[0031] As used herein, the term “hearing device” is to be broadly construed as any device that acts on an actual or potential auditory perception of an individual, including to improve perception of sound signals, to reduce perception of sound signals, etc. In particular, a hearing device can deliver sound signals to a user in any form, including in the form of acoustical stimulation, mechanical stimulation, electrical stimulation, etc., and/or can operate to suppress all or some sound signals. As such, a hearing device can be a device for use by a hearing-impaired person (e.g., hearing aids, middle ear auditory prostheses, bone conduction devices, direct acoustic stimulators, electric-acoustic hearing devices, auditory brainstem stimulators, bimodal hearing prostheses, bilateral hearing prostheses, dedicated tinnitus therapy devices, tinnitus therapy device systems, combinations or variations thereof, etc.) or a device for use by a person with normal hearing (e.g., consumer devices that provide audio streaming, consumer headphones, earphones, and other listening devices), a hearing protection device, etc.
[0032] As noted, the techniques presented herein can be used with a number of different types of hearing devices that deliver electrical stimulation (current signals) alone or in combination with acoustic or mechanical stimulation, such as cochlear implants, auditory brainstem stimulators, tinnitus stimulators, bi-modal hearing prostheses, electric-acoustic hearing devices, etc. Therefore, as used herein, “acoustic stimulation” can refer to the delivery of aided/amplified acoustic signals to the cochlea or to the delivery of unaided (natural) acoustic signals to the cochlea (i.e., reliance on natural hearing in the outer, middle, and/or inner ears).
Also as noted above, merely for ease of illustration, embodiments are primarily described herein with reference to one specific type of hearing device, namely an electric-acoustic hearing device comprising a cochlear implant portion and a hearing aid portion. Again, the techniques presented herein can be used with other types of hearing devices having different types of output devices.
Example System and Components
[0033] To deliver both electrical and acoustical (electric-acoustic) stimulation to a recipient, the recipient can be fitted with a medical device, such as an electric-acoustic hearing device. FIG. 1A is a schematic diagram of such a medical device in the form of an exemplary electric-acoustic hearing device 100 configured to implement embodiments of the present invention, while FIG. 1B is a block diagram of the electric-acoustic hearing device 100. For ease of illustration, FIGs. 1A and 1B will be described together.
[0034] The electric-acoustic hearing device 100 includes an external component 102 and an internal/implantable component 104. The external component 102 is directly or indirectly attached to the body of the recipient and comprises a sound processing unit 110, an external coil 106, and, generally, a magnet (not shown in FIG. 1A) fixed relative to the external coil 106. The external coil 106 is connected to the sound processing unit 110 via a cable 134. The sound processing unit 110 comprises one or more sound input devices 108 (e.g., microphones, audio input ports, cable ports, telecoils, a wireless transceiver, etc.), a sound processor 112, an external transceiver unit (transceiver) 114, and a power source 116. The external component 102, the implantable component 104, or both can include one or more functional components with which techniques described herein can be implemented. For example, the implantable component 104 includes a monitoring component 145 that can be configured to capture electrophysiological measures, as described elsewhere herein. The external component 102 includes functionality (e.g., wireless interface 147) to relay the captured electrophysiological measures to another device, such as user device 150 shown in FIG. 1C.
[0035] In the example of FIGs. 1A and 1B, the sound processing unit 110 is a behind-the-ear (BTE) sound processing unit. However, in other embodiments, the sound processing unit 110 could be a body-worn sound processing unit, a button sound processing unit, an in-the-ear (ITE) unit, etc. Connected to the sound processing unit 110 (e.g., via a cable 135 or wireless interface 147) is a component, sometimes referred to as a hearing aid component 141, that is configured to deliver acoustic stimulation to the recipient. To this end, the hearing aid component 141 includes a receiver 142 (FIG. 1B) that can be, for example, positioned in or near the recipient’s outer ear. The receiver 142 is an acoustic transducer that is configured to deliver acoustic signals (acoustic stimulation) to the recipient via the recipient’s ear canal and middle ear.
[0036] FIGs. 1A and 1B illustrate the use of a receiver 142 to deliver acoustic stimulation to the recipient. However, as noted above, it is to be appreciated that the acoustic stimulation can be delivered in a number of other manners. For example, other embodiments can include an external or implanted vibrator that is configured to deliver acoustic stimulation to the recipient. In still other embodiments, the hearing aid component 141 could be omitted and the recipient’s cochlea is acoustically stimulated using unaided acoustic signals provided to the recipient’s cochlea via the natural hearing path (i.e., via the functioning outer ear and middle ear). Acoustic stimulation can also be delivered using an in-the-ear hearing aid, controlled by the sound processor 112.
[0037] As shown in FIG. 1B, the implantable component 104 comprises an implant body (main module) 122, a lead region 124, and an elongate intra-cochlear stimulating assembly 126. The implant body 122 generally comprises a hermetically-sealed housing 128 in which an internal transceiver unit (transceiver) 130 and a stimulator unit 132 are disposed. The implant body 122 also includes an internal/implantable coil 136 that is generally external to the housing 128, but which is connected to the transceiver 130 via a hermetic feedthrough (not shown in FIG. 1B). Implantable coil 136 is typically a wire antenna coil comprised of multiple turns of electrically insulated single-strand or multi-strand platinum or gold wire. The electrical insulation of implantable coil 136 is provided by a flexible molding (e.g., silicone molding), which is not shown in FIG. 1B. Generally, a magnet is fixed relative to the implantable coil 136.
[0038] Elongate stimulating assembly 126 is configured to be at least partially implanted in the recipient’s cochlea 120 and includes a plurality of longitudinally spaced intra-cochlear electrical stimulating contacts (electrodes) 138 that collectively form a contact array 140 for delivery of electrical stimulation (current signals) to the recipient’s cochlea. In certain arrangements, the contact array 140 can include other types of stimulating contacts, such as optical stimulating contacts, in addition to the electrodes 138.
[0039] Elongate stimulating assembly 126 extends through an opening 121 in the cochlea (e.g., cochleostomy, the round window, etc.) and has a proximal end connected to stimulator unit 132 via lead region 124 and a hermetic feedthrough (not shown in FIG. 1B). Lead region 124 includes a plurality of conductors (wires) that electrically couple the electrodes 138 to the stimulator unit 132.
[0040] Returning to external component 102, the sound input device(s) 108 are configured to detect/receive sound signals and to generate electrical output signals therefrom. The sound processor 112 is configured to execute sound processing that converts the output signals received from the sound input device(s) into coded data signals that represent acoustical and/or electrical stimulation for delivery to the recipient. That is, as noted, the electric-acoustic hearing device 100 operates to evoke perception by the recipient of sound signals received by the sound input device(s) 108 through the delivery of one or both of electrical stimulation signals and acoustic stimulation signals to the recipient. As such, depending on the current operational settings (sometimes referred to as an operational “map”), the sound processor 112 is configured to convert the output signals received from the sound input device(s) into a first set of output signals representative of electrical stimulation and/or into a second set of output signals representative of acoustic stimulation. The output signals representative of electrical stimulation are represented in FIG. 1B by arrow 115, while the output signals representative of acoustic stimulation are represented in FIG. 1B by arrow 117.
[0041] The output signals 115 are provided to the transceiver 114. The transceiver 114 is configured to transcutaneously transfer the output signals 115, in an encoded manner, to the implantable component 104 via external coil 106. More specifically, the magnets fixed relative to the external coil 106 and the implantable coil 136 facilitate the operational alignment of the external coil 106 with the implantable coil 136. This operational alignment of the coils enables the external coil 106 to transmit the coded output signals 115, as well as power signals received from power source 116, to the implantable coil 136. In certain examples, external coil 106 transmits the encoded output signals 115 to implantable coil 136 via a radio frequency (RF) link. However, various other types of energy transfer, such as infrared (IR), electromagnetic, capacitive, and inductive transfer, can be used to transfer the power and/or data from an external component to an electric-acoustic hearing device and, as such, FIG. 1B illustrates only one example arrangement.
[0042] In general, the encoded output signals 115 are received at the transceiver 130 and provided to the stimulator unit 132. The stimulator unit 132 is configured to utilize the output
signals 115 to generate electrical stimulation (e.g., current signals) for delivery to the recipient’s cochlea via one or more stimulating contacts 138. In this way, electric-acoustic hearing device 100 electrically stimulates the recipient’s auditory nerve cells, bypassing absent or defective hair cells that normally transduce acoustic vibrations into neural activity, in a manner that causes the recipient to perceive one or more components of the received sound signals.
[0043] Also shown in FIG. 1B is an EAS adjustment module 118. As will be explained in more detail below, EAS adjustment module 118 is operable to adjust electric-acoustic stimulation parameters (EAS programming) of the electric-acoustic hearing device 100. In certain examples, the EAS adjustment module 118 receives data/instructions from a separate user device 150 (FIG. 1C) that is configured to send communication signals or commands to adjust, for example, amplitudes/magnitudes, frequency ranges, etc. for the electrical stimulation and the acoustic stimulation delivered to the recipient via the electric-acoustic hearing device 100. Notably, and within certain parameters or frequency boundaries, this adjustment can be performed by the recipient with no or limited input from an audiologist or other hearing professional. In accordance with an embodiment, a recipient is able to, using intuitive controls on user device 150, self-adjust selected parameters of the electric-acoustic hearing device 100. For example, in certain embodiments, a recipient, via user device 150 and EAS adjustment module 118, can control the loudness of acoustic stimulation, a mix of the electric versus acoustic stimulation to be delivered, etc.
[0044] FIGs. 1A and 1B illustrate an arrangement in which the electric-acoustic hearing device 100 includes an external component 102. However, it is to be appreciated that embodiments of the present invention can be implemented in hearing devices having alternative arrangements. For example, embodiments of the present invention can be implemented in a totally implantable device, such as a totally implantable auditory prosthesis. A totally implantable auditory prosthesis is an auditory prosthesis in which all components are configured to be implanted under skin/tissue of a recipient. Because all components are implantable, a totally implantable auditory prosthesis is configured to operate, for at least a finite period of time, without the need of an external device. However, an external device can be used to, for example, charge an internal power source (battery) of a totally implantable auditory prosthesis.
[0045] As noted above, it is common for recipients to retain at least part of their normal hearing functionality (i.e., retain at least some residual hearing). Therefore, the cochlea of a recipient
can be acoustically stimulated upon delivery of an aided (or potentially unaided) acoustic signal to the recipient’s outer ear. In the example of FIGs. 1A and 1B, the receiver 142 is used to aid the recipient’s residual hearing. More specifically, the output signals 117 (i.e., the signals representative of acoustic stimulation) are provided to the receiver 142. The receiver 142 is configured to utilize the output signals 117 to generate the acoustic stimulation signals that are provided to the recipient. In other words, the receiver 142 is used to enhance and/or amplify a sound signal that is delivered to the cochlea via the middle ear bones and oval window, thereby creating a pressure wave in the perilymph within the cochlea.
[0046] As such, the electric-acoustic hearing device 100 of FIGs. 1A and 1B is configured to deliver both acoustic stimulation and electrical stimulation (current signals) to a recipient. Acoustic stimulation combined with electrical stimulation is sometimes referred to herein as electro-acoustic stimulation. The electrical stimulation is generated from at least a first portion/segment (i.e., frequencies or frequency ranges) of the sound signals, while the acoustic stimulation signals are generated from at least a second portion of the sound signals. The recipient’s operational settings, which are determined and set during a fitting process, dictate how the electric-acoustic hearing device 100 operates to convert sound signals into the acoustic and/or electrical stimulation.
[0047] FIG. 1C is a block diagram of a user device 150, such as a mobile phone or computer, on which EAS monitoring logic 155 can be hosted, in accordance with certain embodiments presented herein. EAS monitoring logic 155 can be configured to present a graphical recipient interface to a recipient and to communicate with (e.g., send commands to) electric-acoustic hearing device 100 via EAS adjustment module 118 and, possibly, a related controller.
[0048] In general, user device 150 can be a computing device that comprises a processor 152 or controller, a memory 154, a communication interface 156, and a recipient interface 158. Memory 154 can store EAS monitoring logic 155, the function of which is described below. Each of these components can be in communication with one another via a bus (not shown).
[0049] The processor 152 or controller is, for example, a microprocessor or microcontroller that executes instructions for the EAS monitoring logic 155. The processor 152 can execute EAS monitoring logic 155 to, for example, generate a user interface for a recipient of user device 150, determine optimal EAS programming for the electric-acoustic hearing device 100, generate signals indicative of changes to the EAS programming of the electric-acoustic hearing device 100 for execution by the EAS adjustment module 118, etc. (e.g., execute the method(s)
500 and/or 600 described with reference to FIGs. 5 and/or 6). It should be appreciated that memory 154 can include other logic elements that, for ease of illustration, have been omitted from FIG. 1C.
[0050] Memory 154 can comprise read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory 154 can comprise one or more tangible (non-transitory) computer readable storage media (e.g., a memory device) encoded with software comprising computer executable instructions and when the software is executed (by the processor 152) it is operable to perform operations described herein.
[0051] Communication interface 156 can comprise, for example, any combination of network ports (e.g., Ethernet ports), wireless network interfaces, Universal Serial Bus (USB) ports, Institute of Electrical and Electronics Engineers (IEEE) 1394 interfaces, PS/2 ports, etc. In the example of FIGs. 1B and 1C, communication interface 156 is connected to electric-acoustic hearing device 100 via wireless interface 147 (FIG. 1B). Communication interface 156 can be directly connected to the electric-acoustic hearing device 100 or connected to an external device that is in communication with the electric-acoustic hearing device 100. Communication interface 156 can be configured to communicate with electric-acoustic hearing device 100 via a wired or wireless connection 160 (e.g., to provide an indication of, or command for, an EAS programming change, such as a desired shift of a mixed stimulation window and/or acoustic stimulation volume) via EAS adjustment module 118.
[0052] The recipient interface 158 can include one or more output devices, such as a liquid crystal display (LCD) and a speaker, for presentation of visual or audible information to a clinician, audiologist, or, more relevantly, to the recipient. The recipient interface 158 can also comprise one or more input devices that include, for example, a keypad, keyboard, mouse, touchscreen, etc.
[0053] It is to be appreciated that the arrangement for user device 150 shown in FIG. 1C is illustrative and that the embodiments presented herein can include any combination of hardware, software, and firmware configured to perform the functions described herein. For example, the user device 150 can be a personal computer, handheld device (e.g., a tablet computer), a mobile device (e.g., a mobile phone), and/or any other electronic device.
[0054] In accordance with certain embodiments presented herein, EAS monitoring logic 155 is configured to generate a user interface that can be presented to a recipient on the user device 150 and receive recipient input. An example of such a user interface is shown in FIG. 2.
Example Embodiments
[0055] As noted, low-frequency acoustic hearing can successfully be preserved after cochlear implant surgery, allowing for electric-acoustic stimulation (EAS). Also as noted above, EAS provides substantial benefits over electric-only hearing by preserving interaural time difference and temporal fine structure cues for improved sound localization, speech-in-noise performance, speech naturalness, and music perception. However, the optimal EAS programming for a hearing device recipient depends on the recipient’s available postoperative acoustic hearing (e.g., cochlear implantation can cause an average 20-30 dB shift in low-frequency thresholds, and the degree of loss, rate of loss, etc., are highly variable among recipients).
[0056] To address the above and other needs, example embodiments of the present disclosure provide a system and techniques for recipient-directed, automated, and/or partially-automated adjustment of the parameters of the electric-acoustic stimulation hearing device (i.e., adjustment of the EAS programming). That is, as noted, an electric-acoustic hearing device, such as electric-acoustic hearing device 100, delivers both acoustic stimulation and electrical stimulation. For lower frequencies, only acoustic stimulation can be delivered to the recipient, and for higher frequencies, only electrical stimulation can be delivered to the recipient. More specifically, no electrical stimulation is delivered below a first frequency, referred to as the “high-frequency” or “electric” cutoff frequency, and no acoustic stimulation is delivered above a second frequency, referred to as the “low-frequency” or “acoustic” cutoff frequency. A region between the high-frequency cutoff frequency and the low-frequency cutoff frequency, namely a region where both acoustic stimulation and electrical stimulation can be simultaneously delivered, can be referred to as an overlap region. In some cases, it can be desirable to keep the overlap region to a minimum, so as to minimize overlap between electric and acoustic stimulation. However, in order to avoid gaps in therapy, a transition can be implemented so as to provide for a smooth and monotonic transition of loudness perception. That is, the level of acoustic stimulation and/or electrical stimulation can be gradually reduced as frequencies change between the high-frequency cutoff frequency and the low-frequency cutoff frequency, and vice versa.
[0057] The frequency at which a level of acoustic stimulation and a level of electrical stimulation intersect can be referred to as a “crossover frequency,” designated frequency Fcross. The crossover frequency need not be equidistant from frequency F1 and frequency F2, but can be. The crossover frequency will ultimately depend on the recipient’s hearing ability or perception over the range of frequencies that electro-acoustic hearing device 100 processes.
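The relationship between the two cutoff frequencies, the overlap region, and the crossover frequency can be sketched in code. The following is a minimal illustration assuming a simple linear cross-fade; the function names, the 500 Hz / 1000 Hz cutoff values, and the linear taper shape are assumptions for the sketch, not values from the disclosure:

```python
def eas_gains(freq_hz, electric_cutoff=500.0, acoustic_cutoff=1000.0):
    """Return (acoustic_gain, electric_gain) in [0, 1] for a given frequency.

    Below the electric cutoff only acoustic stimulation is delivered; above
    the acoustic cutoff only electric stimulation is delivered.  Within the
    overlap region, gains are linearly cross-faded so that loudness
    perception transitions smoothly and monotonically.
    """
    if freq_hz <= electric_cutoff:
        return 1.0, 0.0
    if freq_hz >= acoustic_cutoff:
        return 0.0, 1.0
    t = (freq_hz - electric_cutoff) / (acoustic_cutoff - electric_cutoff)
    return 1.0 - t, t


def crossover_frequency(electric_cutoff=500.0, acoustic_cutoff=1000.0):
    """With a symmetric linear cross-fade the two gain curves intersect at
    the midpoint; in practice, Fcross need not be equidistant from the two
    cutoff frequencies."""
    return (electric_cutoff + acoustic_cutoff) / 2.0
```

With the assumed cutoffs, a 750 Hz component would receive equal acoustic and electric gain, matching the definition of the crossover frequency.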
[0058] As used herein, adjustment to the parameters of the electric-acoustic stimulation hearing device (i.e., adjustment of the EAS programming) can refer to adjustment of the crossover frequency, attributes of the overlap region, magnitudes/amplitudes of the electrical and acoustic stimulation, and/or other parameters.
A. Electrophysiological Measures
[0059] The system described herein is configured to assess electrophysiological measures that are correlated to acoustic hearing thresholds. These electrophysiological measures can include, but are not limited to, evoked potentials and/or impedances. Evoked potentials can include neural responses (e.g., acoustically-evoked neural responses/potentials, such as Electrocochleography (ECochG) responses, or electrically-evoked neural responses/potentials obtained via, for example, a telemetry system), etc. For example, an ECochG response is a type of acoustically-evoked compound action potential (acoustically-evoked potential) measured using the intracochlear electrodes of the cochlear implant to record the response from the distal portion of the auditory nerve to a frequency-specific acoustic stimulus presented via the system’s acoustic component. Frequency-specific ECochG responses can be elicited by varying the frequency of the acoustic signal presented. This threshold response can be analyzed to represent the Cochlear Microphonic (CM) of the outer hair cells, which has been shown to correlate with audiometric thresholds.
[0060] Electrical neural responses (electrically-evoked potentials) are a type of electrically evoked compound action potential measured using the intracochlear electrodes of the cochlear implant to record the response from the distal portion of the auditory nerve to an electric stimulus from a given intracochlear electrode. Frequency-specific responses can be elicited by providing the stimulation from different intracochlear electrodes of varying location (i.e., more basal electrodes corresponding to higher frequencies and more apical electrodes corresponding to lower frequencies). Different neural response measures (e.g., thresholds, growth function slopes, suprathreshold amplitudes, etc.) have not been shown to directly correlate with
behavioral thresholds; however, it has been hypothesized that neural responses can be made more useful when used in tandem with other objective measures.
[0061] Impedances can refer to Common Ground Impedances, Monopolar Impedances (e.g., MP1, MP2), Four Point Impedances, or Time Varying Impedances (e.g., Trans Impedance Matrix (TIM)), for example. In general, cochlear implant impedances represent the resistance to flow of current between two electrodes or groups of electrodes. Systems presented herein allow for simultaneous stimulation and recording in various configurations as detailed below. Impedance is derived from the known, applied current (I) and the measured voltage (V) using Ohm’s law (V = I x R).
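The Ohm's-law derivation just described can be expressed directly. The function name and the example values below are illustrative assumptions:

```python
def impedance_ohms(applied_current_amps, measured_voltage_volts):
    """Derive impedance from the known applied current (I) and the measured
    voltage (V) using Ohm's law: V = I x R, so R = V / I."""
    if applied_current_amps == 0:
        raise ValueError("applied current must be non-zero")
    return measured_voltage_volts / applied_current_amps


# Example: a 100 microamp probe current producing a 1.2 V reading
# corresponds to an impedance of roughly 12 kOhm.
```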
[0062] “Common Ground Impedance”: In the common ground mode, current is applied and voltage is measured between a single intracochlear electrode and all other intracochlear electrodes shorted together. An increase in common ground impedance averaged across electrodes is coincident with a delayed shift in hearing thresholds, whereas stable averaged common ground impedance is associated with stable hearing thresholds. Common ground impedance can be measured along the electrode array to provide frequency-specific information reflecting tissue growth and health along the cochlea.
[0063] “Monopolar Impedance” (MP1 or MP2): This impedance measurement mode stimulates and records from an individual intracochlear electrode that is grounded to the two extracochlear grounds (i.e., the pin and case grounds). This is measured at one time point at the end of the stimulating pulse, which encapsulates all elements of impedance. It can be measured for each individual electrode which can provide insight into frequency-specific changes in acoustic hearing thresholds.
[0064] “Four Point Impedance”: Four-point impedance is measured by utilizing four adjacent intracochlear electrodes and applying current between the two outer electrodes while measuring the voltage differential between the inner two electrodes. The four adjacent electrodes can run from basal to apical ends of the electrode array to provide frequency-specific electrophysiological information along the cochlea. An increase in total four-point impedances is associated with cochlea bleeding/inflammation and fibrosis development that can lead to delayed increases in hearing thresholds following cochlear implantation. In general, four-point impedances have been found to rise within 24 hours of cochlear implantation and 3 months postoperatively, particularly in the basal region, aligning with the natural timeline of acute and chronic inflammatory responses. Individual inflammatory response times can vary across
recipients. Thus, an increase in postoperative four-point impedances could precede, or occur concurrently with, an increase in acoustic hearing thresholds.
[0065] “Time Varying Impedance” / “Trans Impedance Matrix” (TIM): Trans impedance matrices are measured in the same mode as monopolar impedance (MP1+2) described above. TIM expands the information collected by assessing impedance at numerous time points during the pulse. This allows the impedance measures to be decomposed into sub-components of “access impedance” and “polarization impedance.” The combination of increasing access impedance and stable polarization impedance is associated with hearing threshold changes, which indicates the utility of this measure as a biomarker for acoustic hearing changes.
[0066] The recording of electrophysiological measures can start intraoperatively, and can continue to be taken postoperatively. Postoperative electrophysiological measurements can be manually activated (e.g., via a button press by the recipient, or remotely by a clinician), can be triggered by an event (e.g., a decrease in hearing performance or wear time), or can occur at standard or customized intervals (e.g., every day at a certain time). The timing of certain electrophysiological events/changes relative to cochlear implantation can be indicative of different intracochlear and behavioral threshold changes.
B. Hearing Performance Measures
[0067] In addition, the system described herein is configured to assess one or more hearing performance measures (“performance measures”) correlated to acoustic hearing thresholds. The hearing performance measures can include, but are not limited to: objective or behavioral performance, in-situ audiometry, subjective hearing ability, listening effort, and/or conversational engagement, for example.
[0068] “Objective or behavioral performance”: Objective hearing performance measures/metrics can be collected through self-administered, automated testing, such as the digit triplet test administered through a smartphone application.
[0069] “In-situ audiometry”: In-situ audiometry could be measured using acoustic or electric stimuli presented by the cochlear implant device through self-administered, automated testing, such as with a smartphone application.
[0070] “Subjective hearing ability”: FIG. 2 depicts example graphical recipient interface (GUI) screens 210 and 220 displayed on a recipient interface 158 of a user device 150 for using Ecological Momentary Assessment (EMA) to gather recipient input regarding subjective changes in hearing (GUI screen 210) and/or subjective hearing ability (GUI screen 220),
according to example embodiments. In one example, GUI screen 210 displays an output/prompt 212 (e.g., “Have you noticed a decline in your hearing over the past month?” or the like) and an input/response 214 (e.g., “Yes” or “No,” or the like). In another example, GUI screen 220 displays an output/prompt 222 (e.g., “Please rate your overall ability to hear in the past week” or the like) and an input/response 224 (e.g., “Excellent,” “Good,” “Acceptable,” “Poor,” “Very Poor,” or the like).
[0071] The example GUI screens 210 and 220 of FIG. 2 are illustrative in nature and nonlimiting, and various other suitable outputs/prompts and/or inputs/responses are also possible within the scope of the present disclosure. In some other examples, EMA responses (input) could be collected from recipients via push notifications (output) delivered by a smart accessory (e.g., such as a smartphone, a smartwatch, a smart appliance, etc.). In some other examples, an EMA question or prompt (output) can be presented auditorily to the individual, who can then record their EMA response (input) verbally. For example, subjective ratings can be performed using a Likert scale, visual analogue scale, free text field, or the like.
[0072] “Listening effort”: Listening effort can also be a measure of hearing performance. Listening effort can be monitored during device use using EMA (as with subjective hearing ability above), or can be monitored through psychophysiological biomarkers. The system can physiologically monitor listening effort biomarkers during speech using a sensor package consisting of one or more of the following sensors: microphone(s), photoplethysmography (PPG) sensor(s), electrooculography (EOG) sensor(s), electrocardiography (ECG) sensor(s), temperature sensor(s), electromyography (EMG) sensor(s), inertial measurement unit (IMU) sensor(s), electroencephalography (EEG) sensor(s), functional near-infrared spectroscopy (fNIRS) sensor(s), blood pressure sensor(s), respiration rate sensor(s), and/or galvanic skin response (GSR) sensor(s).
[0073] The following physiological changes can be indicative of an acute stress response consistent with an increase in listening effort (with the sensor(s) that are used to detect the biomarker change given in parentheses): increased vocal fundamental frequency, speaking rate (i.e., increase in high-frequency modulation amplitude), and/or root-mean-square amplitude (microphone combined with Own-Voice Detection); increased respiration rate (ECG, PPG, respiration rate sensor); decreased heart rate variability (ECG, PPG); increased heart rate (ECG, PPG); increased skin conductance (GSR); increased blood pressure (ECG, PPG, blood pressure sensor); increased prefrontal cortex oxygenation (fNIRS); increased pupil dilation (EOG); increased core temperature (temperature sensor); decreased alpha oscillatory (~8-13 Hz) power (EEG); changes in motion such as those arising from nervous habits (IMU, EMG); etc., any of which could be implemented as part of monitoring component 145 (FIG. 1B).
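One simple way such biomarkers could be combined into a single listening-effort index is sketched below. The biomarker names, the choice of z-scores relative to a resting baseline, and the equal weighting are all assumptions for illustration, not a scheme specified by the disclosure:

```python
# Direction in which each biomarker moves under an acute stress response
# consistent with increased listening effort (per the list above):
#   +1 -> an increase indicates effort, -1 -> a decrease indicates effort.
EFFORT_DIRECTION = {
    "heart_rate": +1,              # ECG, PPG
    "heart_rate_variability": -1,  # ECG, PPG
    "skin_conductance": +1,        # GSR
    "respiration_rate": +1,        # ECG, PPG, respiration rate sensor
    "pupil_dilation": +1,          # EOG
    "alpha_power": -1,             # EEG (~8-13 Hz)
}


def effort_score(z_scores):
    """Combine per-biomarker z-scores (relative to a resting baseline) into
    a single listening-effort index; positive values suggest elevated
    effort.  Unknown biomarker names are ignored."""
    contributions = [EFFORT_DIRECTION[name] * z
                     for name, z in z_scores.items()
                     if name in EFFORT_DIRECTION]
    return sum(contributions) / len(contributions) if contributions else 0.0
```

A real system would likely weight and calibrate the biomarkers per recipient; the equal-weight average here only illustrates the sign conventions of the list above.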
[0074] “Conversational engagement”: Conversational engagement (i.e., whether the recipient can hear and actively participate in conversation) can be another measure of hearing performance. The system can detect when a recipient is in a conversational environment based on the detection of speech (e.g., speech in quiet or speech in noise) using automatic environmental classification techniques. In these environments, the system can further detect the recipient’s conversational engagement using Own-Voice Detection (OVD), for example. The OVD input to the microphone can be compared to non-OVD speech input to examine whether the recipient is engaging in normal conversational turn-taking. In one example embodiment, linguistic analyses of the OVD signal can additionally be applied to verify that the recipient’s speech is content-based rather than non-content-based (e.g., requesting repetition or saying “what?”).
[0075] Declines in hearing performance (e.g., decreased objective performance, increased in-situ audiometric thresholds, decreased subjective hearing, increased listening effort, and/or decreased conversational engagement) over time could be used to trigger the execution of machine-learned model(s) to detect hearing changes or device dysfunction, and/or the hearing performance metrics could be input to the machine-learned model(s), as described further below. The recipient input on subjective hearing performance can also be a valuable rehabilitation tool for recipients and medical professionals.
C. Machine-Learned Model
[0076] In some example embodiments, the system can utilize a machine-learned model to classify acoustic hearing and/or to classify device deficiencies. The machine-learned model can be part of EAS monitoring logic 155 of user device 150 of FIG. 1C, for example.
[0077] Classifying acoustic hearing: One or more of these electrophysiological measures, including their timing relative to cochlear implantation, and one or more of these hearing performance measures (i.e., objective performance, in-situ audiometry, subjective hearing, listening effort, and/or conversational engagement) can be input to a machine learning classification model, such as a deep neural network, a k-means clustering model, a support vector machine, or another type of machine-learned model, to infer changes in frequency-specific acoustic hearing thresholds.
[0078] In one example embodiment, an initial or default machine learning classification model can be based on a single population dataset of hundreds, thousands, or more recipients. In an alternative example embodiment, the classification model can start with a characteristic-specific or demographic-specific classification model including recipient data that could inform the model, including but not limited to age, sex, race, duration of hearing loss, duration of severe to profound hearing loss, onset of hearing loss, audiometric configuration, preoperative acoustic thresholds, etiology, electrode array, comorbidities, hearing aid use, insertion approach, steroid regimen, and/or scalar location. The classification model would take electrophysiological measures and/or hearing performance measures and classify whether the recipient has or has not experienced an acoustic threshold shift, the affected frequency region(s), and/or the degree of acoustic threshold shift.
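At a high level, such a classifier maps a feature vector of electrophysiological and performance measures to a hearing-status label. The sketch below uses a simple nearest-centroid classifier on synthetic features as a stand-in for the deep neural network, k-means clustering model, or support vector machine mentioned above; the feature names, labels, and values are all illustrative assumptions:

```python
import math


def train_centroids(samples):
    """samples: list of (feature_vector, label) pairs, where a feature
    vector might hold, e.g., [change in averaged common ground impedance,
    change in ECochG response amplitude, change in subjective EMA rating].
    Returns the per-label mean feature vector (centroid)."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}


def classify(features, centroids):
    """Assign the label of the nearest centroid (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(features, centroids[label]))
```

A production model would be trained on real population data and refined with in-clinic pure-tone audiometry, as described in the following paragraph; the nearest-centroid rule here only illustrates the classify-from-combined-measures idea.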
[0079] In addition, this machine learning algorithm can be further improved over time based on confirmation of changes in acoustic thresholds measured by pure-tone audiometry conducted at in-clinic visits. For example, changes in frequency-specific acoustic thresholds collected in the clinic at 3 months could be used to inform what prior electrophysiological measures were indicative of. This information could be added to the training data set used to improve the initial or default machine learning classification model for other recipients or their own personalized/custom model.
[0080] Classifying device deficiencies: Machine learning can also be used to categorize functionality of the acoustic component. The acoustic component can experience a device deficiency, such as cerumen blockage or demagnetization of the receiver, which would prevent the acoustic component from providing appropriate acoustic stimulation. In that event, electrophysiological responses to acoustic stimuli (i.e., ECochG) would be eliminated or reduced, while electrophysiological responses to electric stimuli would be present and incongruent with the acoustic responses.
[0081] A device deficiency impacting the acoustic component output can also elicit a change in hearing performance (e.g., decreased objective performance, increased in-situ audiometric thresholds, decreased subjective hearing, increased listening effort, and/or decreased conversational engagement). The classification model would input electrophysiological measures and/or hearing performance measures to a pattern recognition/matching algorithm to detect whether the recipient may be experiencing a device deficiency of the acoustic component.
[0082] Combining neural responses (e.g., acoustically-evoked neural responses/potentials, electrically evoked potentials) and/or a plurality of other physiological signals (e.g., impedance data) with a machine learning approach will provide a more robust estimation of hearing ability to improve the accuracy of a system for detecting hearing changes. The machine learning approach can have greater resolution (e.g., more than three severity ratings, such as mild/moderate/severe) by using machine learning and specifying frequency-specific changes to train the model. In addition, combining neural response information (e.g., evoked potentials and/or impedance data) with recipient input-based hearing performance measures (e.g., recipient input via EMA) can further improve system accuracy by improving or enhancing the ability of the system to detect hearing threshold changes and/or device deficiencies.
D. Electric-Acoustic Stimulation (EAS) Programming Adjustments
[0083] If the system detects a change in frequency-specific acoustic hearing thresholds, then it can perform automatic adjustments to the output, high-frequency cutoff, and/or low-frequency cutoff parameters of the electric-acoustic stimulation. In some examples, the system can provide a mechanism to determine threshold levels for acoustic stimulation and/or electric stimulation, and/or to determine upper limits and crossover frequency for EAS.
[0084] FIG. 3A is a schematic view 310 representing an initial EAS programming parameter set for acoustic stimulation 312 and electric stimulation 314 of an electrode array 140. FIGs. 3B-3D illustrate different EAS programming adjustments for the electrode array 140 in accordance with various example embodiments described herein.
[0085] The automatic adjustments are made when there is a detected change in acoustic hearing thresholds from a baseline state (FIG. 3A), and can include, but are not limited to: (1) change to electric-only stimulation in the affected region (FIG. 3B), (2) increase acoustic stimulation in the affected region (FIG. 3C), and/or (3) change to electric acoustic stimulation in the affected region (FIG. 3D).
[0086] FIG. 3B is a schematic view 320 representing a change from acoustic stimulation 322 to electric-only stimulation 324 in the affected region 326 where an increase in thresholds is detected, according to one example embodiment. As shown in FIG. 3B, the electric stimulation low-frequency cutoff will be lowered to include the affected region to provide electric stimulation coverage in the area where acoustic hearing may have been lost, and the acoustic high-frequency cutoff will be lowered to exclude the affected region to reduce superfluous amplification. Audibility and comfortability of electric-only stimulation parameters in the affected region can be verified by subjective recipient input and/or electrically evoked stapedial reflex thresholds, for example. Subjective recipient input could be entered on an accessory device such as a smartphone, via manual button presses on the sound processor, or verbally and picked up by system microphone(s).
[0087] FIG. 3C is a schematic view 330 representing acoustic stimulation 332 with an increase in acoustic stimulation output 333 in the affected region 336 where an increase in thresholds is detected (while the parameters for electric stimulation 334 remain unchanged), according to another example embodiment. As shown in FIG. 3C, the electric and acoustic frequency cutoffs will be maintained, but the acoustic output will be increased in the affected region proportionate to the inferred acoustic threshold shift to maintain gain-response target matching. This would be appropriate when the low-frequency acoustic hearing thresholds are still audible and likely to benefit from amplification despite the shift (< 80 dB HL). Audibility and comfortability of acoustic output level increases in the affected region can be verified by subjective recipient input and/or acoustically evoked stapedial reflex thresholds, for example.
[0088] FIG. 3D is a schematic view 340 representing a change to electric-acoustic stimulation (both acoustic stimulation 342 and electric stimulation 344 concurrently) in the affected region 346 where an increase in thresholds is detected, with an (optional) increase in acoustic stimulation output 343 (as represented by a dashed line in FIG. 3D), according to another example embodiment. As shown in FIG. 3D, the electric stimulation low-frequency cutoff will be lowered to include the affected region, and the acoustic high-frequency cutoff will be maintained to support dual encoding in case electrophysiological changes are not accompanied by acoustic threshold change. In this embodiment, the acoustic output can also be increased in the affected region to compensate for loss, as previously described. Audibility and comfortability of acoustic and electric output parameters in the affected region can be verified, as previously described.
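The three adjustment strategies of FIGs. 3B-3D can be sketched as a single parameter-update function. The parameter names, the representation of the affected region as a frequency range, and especially the dispatch rule (residual audibility selects between electric-only and the other strategies, with a hypothetical 15 dB cutoff separating gain-only from dual-encoding) are assumptions for illustration; the disclosure ties strategy 3C to residual audibility (< 80 dB HL) but does not fix a complete dispatch rule.

```python
# Hypothetical sketch of the three EAS adjustments (FIGs. 3B-3D) as updates to
# a parameter set. Cutoffs are in Hz, gain and shift in dB. The dispatch rule
# and the 15 dB boundary are illustrative assumptions.

def adjust_eas(params, affected_lo, affected_hi, shift_db, residual_audible):
    p = dict(params)  # do not mutate the baseline parameter set
    if not residual_audible:
        # FIG. 3B: electric-only in the affected region -- lower the electric
        # low-frequency cutoff to cover it, lower the acoustic high-frequency
        # cutoff to exclude it (avoids superfluous amplification).
        p["electric_low_cutoff_hz"] = min(p["electric_low_cutoff_hz"], affected_lo)
        p["acoustic_high_cutoff_hz"] = min(p["acoustic_high_cutoff_hz"], affected_lo)
    elif shift_db < 15:
        # FIG. 3C: keep both cutoffs, raise acoustic output proportionate to
        # the inferred shift to maintain gain-response target matching.
        p["acoustic_gain_db"] += shift_db
    else:
        # FIG. 3D: dual encoding -- extend electric coverage into the affected
        # region while keeping acoustic stimulation there (optionally boosted).
        p["electric_low_cutoff_hz"] = min(p["electric_low_cutoff_hz"], affected_lo)
        p["acoustic_gain_db"] += shift_db
    return p

base = {"acoustic_high_cutoff_hz": 750, "electric_low_cutoff_hz": 750,
        "acoustic_gain_db": 0}
print(adjust_eas(base, 250, 500, 40, residual_audible=False))  # FIG. 3B case
print(adjust_eas(base, 250, 500, 10, residual_audible=True))   # FIG. 3C case
```

As the surrounding paragraphs note, any such automatic change would still be verified for audibility and comfortability via recipient input and/or stapedial reflex thresholds.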
E. Networking of Electrophysiological/Hearing Information
[0089] FIG. 4 is a conceptual diagram illustrating an example system 400 in accordance with one or more aspects of the present disclosure. The system 400 includes a local portion 420, a remote portion 440, and a data communication network 189 (e.g., a wide area network (WAN), etc.). The local portion 420 includes one or more external components 102(A), 102(B) (e.g., one or more sound processing units 110(A), 110(B) and one or more external coils 106(A), 106(B)), a user device 150 (e.g., a smart phone) with which a hearing device recipient (not
shown) can interact, and a wireless gateway/router 422 that is connected with the data communication network 189 (e.g., WAN) and configured to provide a local network (e.g., a local area network (LAN), etc.) for the user device 150. As noted, the external components 102(A), 102(B) include EAS adjustment module (controller) 118, and the user device 150 can be configured to execute EAS monitoring logic 155 according to the techniques described herein. The remote portion 440 includes a computing device 442 with which a medical professional 444 can interact and that is also connected with the data communication network 189. The user device 150 in the local portion 420 can communicate with the computing device 442 in the remote portion 440 over the data communication network 189 via the wireless gateway/router 422 (e.g., Wi-Fi connection, Bluetooth connection, etc.) in one example, or a cellular network 430 (e.g., 4G/LTE, 5G, next gen, etc. connection) in another example. Example embodiments are not limited to these devices, network technologies, or connection interfaces, and various other examples are also possible.
In one example embodiment, data about electrophysiological measurements, performance measures, inferred device deficiencies, and/or inferred acoustic threshold changes can be relayed to the professional 444 (e.g., an audiologist) via the computing device 442 at the remote portion 440 through the data communication network 189. As shown in FIG. 4, the professional 444 can use the computing device 442 to see and interact with the information in real time or retrospectively. In this way, the electrophysiological information could be used to justify an in-person clinic visit to confirm change in acoustic thresholds, device deficiencies, and confirm or perform required EAS programming changes, among various other uses.
F. Example Processes
[0090] FIG. 5 is a flowchart of operations of a first method 500 for enabling automated electric-acoustic stimulation programming adjustments, according to an example embodiment. For ease of illustration, the method 500 of FIG. 5 will generally be described with reference to the electric-acoustic hearing device 100 of FIGs. 1A-1B, the user device 150 of FIGs. 1C and 2, and the system 400 of FIG. 4.
[0091] At operation 510, method 500 includes obtaining one or more electrophysiological measures of acoustic hearing associated with a hearing device recipient. At operation 520, method 500 includes obtaining one or more hearing performance measures of the recipient. At operation 530, method 500 includes analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures. At operation 540, method 500
includes generating an output based on the analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures.
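The four operations of method 500 can be sketched as a pipeline in which each stage is pluggable. The concrete measure sources and the analysis rule below are placeholders of my own; the disclosure leaves them to the implementation (e.g., a machine-learned model at operation 530).

```python
# Minimal sketch of method 500 as a four-stage pipeline. Each stage is passed
# in as a callable so that different measure sources and analyzers can be
# substituted; the example inputs and the threshold rule are placeholders.

def method_500(get_electrophys, get_performance, analyze, generate_output):
    electro = get_electrophys()        # operation 510
    perf = get_performance()           # operation 520
    finding = analyze(electro, perf)   # operation 530
    return generate_output(finding)    # operation 540

result = method_500(
    lambda: {"ecochg_drop_db": 12},            # e.g., evoked potential result
    lambda: {"speech_score_decline": 0.2},     # e.g., objective performance
    lambda e, p: e["ecochg_drop_db"] > 10 and p["speech_score_decline"] > 0.1,
    lambda changed: "notify clinician" if changed else "no action",
)
print(result)  # -> notify clinician
```

The output stage corresponds to the alternatives in paragraphs [0096]-[0097]: it could emit EAS programming adjustments instead of (or in addition to) a notification.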
[0092] In some examples, obtaining the one or more electrophysiological measures of acoustic hearing associated with the hearing device recipient (operation 510) includes obtaining results of one or more evoked potential measurements, or obtaining results of one or more impedance measurements. In some other examples, both the results of one or more evoked potential measurements and the results of one or more impedance measurements can be obtained at operation 510.
[0093] In some examples, obtaining the one or more hearing performance measures of the recipient (operation 520) includes obtaining one or more hearing performance measures of the recipient based on recipient-provided feedback. In some examples, obtaining the one or more hearing performance measures of the recipient (operation 520) includes obtaining one or more objective hearing performance measures of the recipient. In some examples, obtaining the one or more hearing performance measures of the recipient (operation 520) includes one or more of determining one or more measures of hearing performance, determining one or more measures via in-situ audiometry, determining one or more measures of subjective hearing ability, determining one or more measures of listening effort, or determining one or more measures of conversational engagement.
[0094] In some examples, analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures (operation 530) includes applying a machine-learned model to detect a potential change in acoustic hearing of the recipient. In some examples, the machine-learned model can also be utilized to classify the potential change in acoustic hearing of the recipient at operation 530.
[0095] In some examples, analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures (operation 530) includes applying a machine-learned model to detect a potential change in operation of a hearing device of the recipient. In some examples, the machine-learned model can also be utilized to classify the potential change in operation of the hearing device at operation 530.
[0096] In some examples, generating an output based on the analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures (operation 540) includes generating an output representing one or more electric-acoustic stimulation (EAS) programming adjustments for a hearing device of the recipient responsive
to determining, based on the analyzing, at least one of a potential change in acoustic hearing of the recipient or a potential change in operation of the hearing device.
[0097] In some examples, generating an output based on the analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures (operation 540) includes automatically outputting a notification responsive to determining, based on the analyzing, at least one of a potential change in acoustic hearing of the recipient or a potential change in operation of the hearing device.
[0098] FIG. 6 is a flowchart of operations of a second method 600 for enabling automated electric-acoustic stimulation programming adjustments, according to an example embodiment. For ease of illustration, the method 600 of FIG. 6 will generally be described with reference to the electric-acoustic hearing device 100 of FIGs. 1A-1B, the user device 150 of FIGs. 1C and 2, and the system 400 of FIG. 4.
[0099] At operation 610, method 600 includes performing one or more electrophysiological tests to obtain one or more evoked potentials or one or more impedances from a hearing device recipient. At operation 620, method 600 includes correlating the one or more evoked potentials or impedances with one or more hearing performance measures. At operation 630, method 600 includes automatically providing one or more electric-acoustic stimulation (EAS) programming adjustments for the hearing device based on the correlating.
[00100] In some examples, obtaining the one or more evoked potentials (operation 610) includes obtaining one or more electrocochleography (ECochG) measurements, or obtaining one or more neural response measurements, or obtaining a combination thereof. In some examples, obtaining the one or more impedances (operation 610) includes obtaining one or more common ground impedance measurements, one or more monopolar impedance measurements, one or more four-point impedance measurements, one or more time-varying impedance measurements, or a combination thereof.
[00101] In some examples, operation 620 includes obtaining the one or more hearing performance measures based on recipient input received from the recipient of the hearing device. In some examples, operation 620 includes obtaining one or more measures of objective hearing performance, obtaining one or more measures via in-situ audiometry, obtaining one or more measures of subjective hearing ability, obtaining one or more measures of listening effort, obtaining one or more measures of conversational engagement, or a combination thereof. In some examples, obtaining one or more measures of subjective hearing ability includes
obtaining recipient input relating to subjective hearing ability via Ecological Momentary Assessment (EMA).
[00102] In some examples, method 600 can further include determining whether a change in acoustic hearing has occurred based on the correlating at operation 620, and automatically providing the one or more electric-acoustic stimulation (EAS) programming adjustments for the hearing device at operation 630 responsive to determining that the change in acoustic hearing has occurred.
[00103] In some examples, method 600 can further include determining whether a change in device functionality has occurred based on the correlating at operation 620, and automatically providing the one or more electric-acoustic stimulation (EAS) programming adjustments for the hearing device at operation 630 responsive to determining that the change in device functionality has occurred.
[00104] In some examples, method 600 can further include determining the one or more electric-acoustic stimulation (EAS) programming adjustments for the hearing device based on the correlating of the one or more evoked potentials or impedances with one or more acoustic hearing thresholds and with the one or more hearing performance measures at operation 620.
[00105] In some examples, correlating the one or more evoked potentials or impedances with one or more acoustic hearing thresholds and with one or more hearing performance measures (operation 620) includes applying a machine-learned model to the one or more evoked potentials or impedances and the one or more hearing performance measures to infer changes to one or more of acoustic hearing thresholds and/or device deficiencies. In some examples, the machine-learned model can also be used to determine the one or more electric-acoustic stimulation (EAS) programming adjustments for the hearing device responsive to inferring the changes to the one or more acoustic hearing thresholds and/or the device deficiencies.
[00106] In some examples, method 600 can further include determining that a potential change in acoustic hearing has occurred based on correlating the one or more evoked potentials or impedances with one or more acoustic hearing thresholds and the one or more hearing performance measures at operation 620. In some examples, a machine-learned model can be used to classify the potential change in acoustic hearing.
[00107] In some examples, method 600 can further include determining that a potential change in device functionality has occurred based on correlating the one or more evoked potentials or impedances with one or more acoustic hearing thresholds and with the one or more hearing
performance measures at operation 620. In some examples, a machine-learned model can be used to classify the potential change in device functionality.
[00108] In some examples, method 600 can further include notifying the recipient, a clinician, a caregiver, or a designated medical professional regarding potential changes in acoustic hearing and/or device functionality based on the correlating.
[00109] Thus, the system described above with reference to the example embodiments of FIGs. 1A-1C and 4 and the corresponding techniques described above with reference to the example embodiments of FIGs. 2, 3A-3D, 5 and 6 are configured to combine objective measures and subjective feedback to adjust the settings of an electric-acoustic stimulation (EAS) device, including but not limited to adjustments to the acoustic stimulation, adjustments to the electric stimulation, or adjustments to both types of stimulation. Some examples provide a self-adjustive device that utilizes a combination of objective measures (e.g., evoked potentials, impedances) and subjective measures (e.g., obtained via Ecological Momentary Assessment (EMA), etc.) with machine learning for automation of the analysis of the objective and subjective measures and adaptation of electric-acoustic stimulation parameters based on the machine learning analysis.
[00110] According to example embodiments described herein, providing automated postoperative hearing and device deficiency tracking and EAS programming adjustments based on advanced electrophysiological measures and recipient input-based performance measures (i.e., objective performance, in-situ audiometry, subjective hearing ability, listening effort, and/or conversational engagement) can help to curtail the occurrence of superfluous postoperative clinic visits, thereby reducing recipient and clinic burdens. In turn, this will positively influence the uptake of EAS fittings. With advances in drug-eluting dexamethasone electrode arrays, minimally traumatic surgical techniques, and expansion of audiometric indications, EAS for cochlear implant recipients with acoustic hearing preservation is an increasingly important area of growth. The present invention could also be a valuable rehabilitation tool for recipients and professionals who otherwise avoid fitting EAS because the acoustic component can be prone to device deficiencies/malfunctions.
Variations and Alternatives
[00111] It is also to be appreciated that aspects of techniques presented herein could be implemented in a number of different devices. For example, the techniques presented could
be implemented in a device comprising an in-the-ear (ITE) component operating as a hearing aid and an implant operating as a cochlear implant (e.g., mostly implantable cochlear implant). In such examples, two processors are operating and either or both could be adjusted/set based on the techniques presented herein. In particular, there is a processor in the implantable component for use in delivering electrical stimulation and a processor in the ITE component for use in delivering acoustic stimulation. A microphone can be provided in the ITE component which sends microphone signals as input to the acoustic stimulation processor (in the ITE component), and as input signals to the electrical stimulation processor (in the implant) by sending wireless data to the implant.
[00112] Also as noted above, embodiments of the present invention have been described herein with reference to one specific type of hearing device, namely an electric-acoustic hearing device comprising a cochlear implant portion and a hearing aid portion. However, it is to be appreciated that the techniques presented herein can be used with other types of hearing prostheses, such as bi-modal hearing prostheses, electric-acoustic hearing devices comprising other types of output devices (e.g., auditory brainstem stimulators, direct acoustic stimulators, bone conduction devices, etc.), tinnitus stimulators, etc.
[00113] As should be appreciated, while particular uses of the technology have been illustrated and discussed above, the disclosed technology can be used with a variety of devices in accordance with many examples of the technology. The above discussion is not meant to suggest that the disclosed technology is only suitable for implementation within systems akin to that illustrated in the figures. In general, additional configurations can be used to practice the processes and systems herein and/or some aspects described can be excluded without departing from the processes and systems disclosed herein.
[00114] This disclosure described some aspects of the present technology with reference to the accompanying drawings, in which only some of the possible aspects were shown. Other aspects can, however, be embodied in many different forms and should not be construed as limited to the aspects set forth herein. Rather, these aspects were provided so that this disclosure was thorough and complete and fully conveyed the scope of the possible aspects to those skilled in the art.
[00115] As should be appreciated, the various aspects (e.g., portions, components, etc.) described with respect to the figures herein are not intended to limit the systems and processes to the particular aspects described. Accordingly, additional configurations can be used to
practice the methods and systems herein and/or some aspects described can be excluded without departing from the methods and systems disclosed herein.
[00116] According to certain aspects, systems and non-transitory computer readable storage media are provided. The systems are configured with hardware configured to execute operations analogous to the methods of the present disclosure. The one or more non-transitory computer readable storage media comprise instructions that, when executed by one or more processors, cause the one or more processors to execute operations analogous to the methods of the present disclosure.
[00117] Similarly, where steps of a process are disclosed, those steps are described for purposes of illustrating the present methods and systems and are not intended to limit the disclosure to a particular sequence of steps. For example, the steps can be performed in differing order, two or more steps can be performed concurrently, additional steps can be performed, and disclosed steps can be excluded without departing from the present disclosure. Further, the disclosed processes can be repeated.
[00118] Although specific aspects were described herein, the scope of the technology is not limited to those specific aspects. One skilled in the art will recognize other aspects or improvements that are within the scope of the present technology. Therefore, the specific structure, acts, or media are disclosed only as illustrative aspects. The scope of the technology is defined by the following claims and any equivalents therein.
[00119] It is also to be appreciated that the embodiments presented herein are not mutually exclusive and that the various embodiments can be combined with another in any of a number of different manners.
Claims
1. A method comprising:
obtaining one or more electrophysiological measures of acoustic hearing associated with a hearing device recipient;
obtaining one or more hearing performance measures of the recipient;
analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures; and
generating an output based on the analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures.
2. The method of claim 1, wherein obtaining the one or more electrophysiological measures of acoustic hearing associated with the hearing device recipient comprises: obtaining results of one or more evoked potential measurements.
3. The method of claim 1, wherein obtaining the one or more electrophysiological measures of acoustic hearing associated with the hearing device recipient comprises: obtaining results of one or more impedance measurements.
4. The method of claim 1, wherein obtaining the one or more electrophysiological measures of acoustic hearing associated with the hearing device recipient comprises: obtaining results of one or more evoked potential measurements; and obtaining results of one or more impedance measurements.
5. The method of claim 1, wherein obtaining the one or more hearing performance measures of the recipient comprises: obtaining one or more hearing performance measures of the recipient based on recipient-provided feedback.
6. The method of claim 1, wherein obtaining the one or more hearing performance measures of the recipient comprises: obtaining one or more objective hearing performance measures of the recipient.
7. The method of claim 1, wherein obtaining the one or more hearing performance measures of the recipient comprises one or more of:
determining one or more measures of hearing performance;
determining one or more measures via in-situ audiometry;
determining one or more measures of subjective hearing ability;
determining one or more measures of listening effort; or
determining one or more measures of conversational engagement.
8. The method of claim 1, 2, 3, 4, 5, 6, or 7, wherein analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures comprises: applying a machine-learned model to detect a potential change in acoustic hearing of the recipient.
9. The method of claim 8, further comprising: utilizing the machine-learned model to classify the potential change in acoustic hearing of the recipient.
10. The method of claim 1, 2, 3, 4, 5, 6, or 7, wherein analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures comprises: applying a machine-learned model to detect a potential change in operation of a hearing device of the recipient.
11. The method of claim 10, further comprising: utilizing the machine-learned model to classify the potential change in operation of the hearing device.
12. The method of claim 1, 2, 3, 4, 5, 6, or 7, wherein generating an output based on the analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures comprises: generating an output representing one or more electric-acoustic stimulation (EAS) programming adjustments for a hearing device of the recipient responsive to determining,
based on the analyzing, at least one of a potential change in acoustic hearing of the recipient or a potential change in operation of the hearing device.
13. The method of claim 1, 2, 3, 4, 5, 6, or 7, wherein generating an output based on the analyzing the one or more electrophysiological measures with respect to the one or more hearing performance measures comprises: automatically outputting a notification responsive to determining, based on the analyzing, at least one of a potential change in acoustic hearing of the recipient or a potential change in operation of the hearing device.
14. A method comprising:
performing one or more electrophysiological tests to obtain one or more evoked potentials or one or more impedances from a hearing device recipient;
correlating the one or more evoked potentials or impedances with one or more hearing performance measures; and
automatically providing one or more electric-acoustic stimulation (EAS) programming adjustments for the hearing device based on the correlating.
15. The method of claim 14, wherein obtaining the one or more evoked potentials comprises one or more of: obtaining one or more acoustically-evoked potential measurements; or obtaining one or more electrically-evoked potential measurements.
16. The method of claim 14, wherein obtaining the one or more impedances comprises one or more of:
obtaining one or more common ground impedance measurements,
obtaining one or more monopolar impedance measurements,
obtaining one or more four-point impedance measurements, or
obtaining one or more time-varying impedance measurements.
17. The method of claim 14, 15, or 16, further comprising:
obtaining the one or more hearing performance measures based on recipient input received from the recipient of the hearing device.
18. The method of claim 17, wherein obtaining the one or more hearing performance measures based on recipient input received from the recipient includes one or more of:
obtaining one or more measures of hearing performance;
obtaining one or more measures via in-situ audiometry;
obtaining one or more measures of subjective hearing ability;
obtaining one or more measures of listening effort; and/or
obtaining one or more measures of conversational engagement.
19. The method of claim 18, wherein obtaining one or more measures of subjective hearing ability comprises: obtaining recipient input relating to subjective hearing ability via Ecological Momentary Assessment (EMA).
20. The method of claim 14, 15, or 16, further comprising:
determining whether a change in acoustic hearing has occurred based on the correlating; and
automatically providing the one or more electric-acoustic stimulation (EAS) programming adjustments for the hearing device responsive to determining that the change in acoustic hearing has occurred.
21. The method of claim 14, 15, or 16, further comprising:
determining whether a change in device functionality has occurred based on the correlating; and
automatically providing the one or more electric-acoustic stimulation (EAS) programming adjustments for the hearing device responsive to determining that the change in device functionality has occurred.
22. The method of claim 14, 15, or 16, further comprising: determining the one or more electric-acoustic stimulation (EAS) programming adjustments for the hearing device based on the correlating of the one or more evoked
potentials or impedances with one or more acoustic hearing thresholds and with the one or more hearing performance measures.
23. The method of claim 14, 15, or 16, wherein correlating the one or more evoked potentials or impedances with one or more acoustic hearing thresholds and with one or more hearing performance measures comprises: applying a machine-learned model to the one or more evoked potentials or impedances and the one or more hearing performance measures to infer changes to one or more of acoustic hearing thresholds and/or device deficiencies.
24. The method of claim 23, further comprising: using the machine-learned model to determine the one or more electric-acoustic stimulation (EAS) programming adjustments for the hearing device responsive to inferring the changes to the one or more acoustic hearing thresholds and/or the device deficiencies.
25. The method of claim 14, 15, or 16, further comprising: determining that a potential change in acoustic hearing has occurred based on correlating the one or more evoked potentials or impedances with one or more acoustic hearing thresholds and the one or more hearing performance measures.
26. The method of claim 25, further comprising: classifying the potential change in acoustic hearing using a machine-learned model.
27. The method of claim 14, 15, or 16, further comprising: determining that a potential change in device functionality has occurred based on correlating the one or more evoked potentials or impedances with one or more acoustic hearing thresholds and with the one or more hearing performance measures.
28. The method of claim 27, further comprising: classifying the potential change in device functionality using a machine-learned model.
29. The method of claim 14, 15, or 16, further comprising:
notifying the recipient, a clinician, a caregiver, or a designated medical professional regarding potential changes in acoustic hearing and/or device functionality based on the correlating.
30. One or more non-transitory computer readable storage media comprising instructions that, when executed by a processor, cause the processor to: obtain, from a hearing device, one or more electrophysiological measures of acoustic hearing associated with a recipient of the hearing device; obtain one or more hearing performance measures of the recipient; analyze the one or more electrophysiological measures with respect to the one or more hearing performance measures; and initiate at least one adjustment to operation of the hearing device based on the analyzing of the one or more electrophysiological measures with respect to the one or more hearing performance measures.
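The obtain/analyze/initiate sequence of claim 30 can be sketched as a small pipeline. All device-facing calls below are hypothetical stand-ins, and the 5 dB drift criterion is an arbitrary example, not a value from the application.

```python
# Sketch of the claim-30 pipeline: obtain electrophysiological and hearing
# performance measures, analyze one against the other relative to a stored
# baseline, and initiate an adjustment only when both agree that something
# has changed. All data sources here are hard-coded stand-ins.

def obtain_electrophysiological_measures():
    return {"ecap_threshold_db": 42.0}          # stand-in for a device read

def obtain_hearing_performance_measures():
    return {"speech_in_noise_score": 0.55}      # stand-in for a recipient test

def analyze(electro, performance, baseline):
    """True when the evoked threshold drifted up AND performance degraded."""
    drift = electro["ecap_threshold_db"] - baseline["ecap_threshold_db"]
    degraded = (performance["speech_in_noise_score"]
                < baseline["speech_in_noise_score"])
    return drift > 5.0 and degraded

def run_pipeline(baseline):
    electro = obtain_electrophysiological_measures()
    performance = obtain_hearing_performance_measures()
    if analyze(electro, performance, baseline):
        return "adjustment_initiated"
    return "no_action"

result = run_pipeline({"ecap_threshold_db": 35.0, "speech_in_noise_score": 0.70})
```

Requiring both signals to move before acting is one plausible way to avoid reacting to noise in either measure alone; the claims themselves do not mandate any particular decision rule.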
31. The one or more non-transitory computer readable storage media of claim 30, wherein the one or more electrophysiological measures include one or more evoked potentials.
32. The one or more non-transitory computer readable storage media of claim 30, wherein the one or more electrophysiological measures include one or more impedances.
33. The one or more non-transitory computer readable storage media of claim 30, 31, or 32, wherein the one or more hearing performance measures are obtained based on recipient-provided feedback.
34. The one or more non-transitory computer readable storage media of claim 30, 31, or 32, wherein the one or more hearing performance measures include one or more objective hearing performance measures.
35. The one or more non-transitory computer readable storage media of claim 30, 31, or 32, wherein the instructions that, when executed, cause the processor to analyze the one or more electrophysiological measures with respect to the one or more hearing performance measures include instructions that cause the processor to:
execute a machine-learned model to detect a potential change in acoustic hearing of the recipient.
36. The one or more non-transitory computer readable storage media of claim 35, wherein the machine-learned model is configured to classify the potential change in acoustic hearing of the recipient.
37. The one or more non-transitory computer readable storage media of claim 30, 31, or 32, wherein the instructions that, when executed, cause the processor to analyze the one or more electrophysiological measures with respect to the one or more hearing performance measures include instructions that cause the processor to: execute a machine-learned model to detect a potential change in operation of the hearing device.
38. The one or more non-transitory computer readable storage media of claim 37, wherein the machine-learned model is configured to classify the potential change in operation of the hearing device.
39. A hearing device system, comprising: one or more sensors; a memory storing computer-readable instructions; and a processor configured to execute the computer-readable instructions to: perform one or more electrophysiological tests to obtain one or more evoked potentials or one or more impedances from a recipient of the hearing device; correlate the one or more evoked potentials or the one or more impedances with one or more hearing performance measures; and automatically provide one or more electric-acoustic stimulation (EAS) programming adjustments for the hearing device based on the correlating.
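The correlating step of claim 39 could be realized, in one illustrative reading, as a statistical correlation across sessions. The Pearson computation below is standard; the data values and the -0.8 trigger threshold are invented for the example.

```python
# Sketch of the claim-39 correlation step (hypothetical data): compute a
# Pearson correlation between per-session evoked-potential thresholds and
# hearing performance scores; a strong negative correlation (thresholds
# rising while performance falls) triggers an automatic EAS adjustment.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

evoked_thresholds = [30.0, 32.0, 35.0, 40.0]   # dB, rising across sessions
performance_scores = [0.80, 0.74, 0.66, 0.50]  # falling across sessions

r = pearson(evoked_thresholds, performance_scores)
auto_adjust = r < -0.8   # strong negative correlation -> adjust programming
```

With these toy numbers the two series are almost perfectly anti-correlated, so the adjustment would fire; real data would of course be noisier and the decision rule correspondingly more conservative.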
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| US63/578,237 | 2023-08-23 | | |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| WO2025041015A1 | 2025-02-27 |
Similar Documents

| Publication | Title |
| --- | --- |
| US11723572B2 | Perception change-based adjustments in hearing prostheses |
| US8768477B2 | Post-auricular muscle response based hearing prosthesis fitting |
| US11979719B2 | Objective determination of acoustic prescriptions |
| US11786724B2 | Recipient-directed electrode set selection |
| US10292644B2 | Automated inner ear diagnoses |
| US20210260378A1 | Sleep-linked adjustment methods for prostheses |
| EP3423150B1 | Systems for using an evoked response to determine a behavioral audiogram value |
| WO2025041015A1 | Electric-acoustic stimulation parameter adjustment |
| US20230372712A1 | Self-fitting of prosthesis |
| US20240325733A1 | Monitoring stimulating assembly insertion |
| WO2025010075A1 | Systems and methods for identifying an evoked response measured through a cochlear implant |
| WO2024246666A1 | Electrocochleography-based classification |
| WO2024023676A1 | Techniques for providing stimulus for tinnitus therapy |
| WO2024194760A1 | Electro-acoustic stimulation control |
| WO2023214254A1 | Electrocochleography-based insertion monitoring |
| WO2024209308A1 | Systems and methods for affecting dysfunction with stimulation |