
WO2020090763A1 - Processing device, system, processing method, and program - Google Patents


Info

Publication number: WO2020090763A1
Authority: WIPO (PCT)
Application number: PCT/JP2019/042240
Prior art keywords: sound data, section, sound, value, indicating
Other languages: French (fr), Japanese (ja)
Inventors: 小林 透, 隆真 亀谷
Original assignee: パイオニア株式会社 (Pioneer Corporation)
Application filed by パイオニア株式会社 (Pioneer Corporation)
Priority to JP2020553908A (granted as JP7089650B2)
Publication of WO2020090763A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B7/00 Instruments for auscultation
    • A61B7/02 Stethoscopes
    • A61B7/04 Electric stethoscopes

Definitions

  • The present invention relates to a processing device, a system, a processing method, and a program.
  • Patent Document 1 describes a technique for calculating a body sound characteristic using the power ratio between the body sound at a first part of a living body and the body sound at a second part, the power of the body sound signal in a specific frequency band, and the like.
  • Patent Document 2 describes extracting an exhalation sound from continuous breath sounds and using a value indicating the sound pressure of the exhalation sound to detect a suspected exhalation sound, i.e., one suspected of being abnormal. Patent Document 2 further describes attaching a breathing band sensor wound around a subject's chest, measuring changes in chest expansion and contraction during breathing motion, and extracting the exhalation sound using the measurement result.
  • Patent Document 3 describes digitizing the output signal of a sensor including a piezoelectric element and filtering it with a high-pass filter to obtain a respiratory airflow sound signal of a living body. Patent Document 3 further describes specifying the time periods in which the inspiratory sound and the expiratory sound occur, based on the magnitude of the amplitude of the respiratory airflow sound signal.
  • Patent Document 4 describes extracting breath sounds from body sounds using a bandpass filter, and estimating the breathing section based on the power pattern of the breathing sound; a preset threshold value is used to estimate the breathing section.
  • However, the signal obtained by a sensor includes the effects of body sounds other than breath sounds, and these effects vary from person to person.
  • An example of the problem to be solved by the present invention is to provide a technique for calculating, from body sound data, a breathing volume close to the volume perceived by human hearing.
  • The invention according to claim 1 is a processing device comprising: an acquisition unit that acquires sound data including breath sounds; and a section identifying unit that identifies, using a threshold value for a value indicating the amplitude of the sound data, at least one of a first section presumed to be breathing in the sound data and a second section between the plurality of first sections. The value indicating the amplitude is a value indicating the magnitude of vibration at each time of the sound data, and the section identifying unit determines the threshold value based on the sound data.
  • The invention according to claim 10 is a system comprising the processing device according to claim 1 and a sensor, wherein the acquisition unit acquires the sound data indicating the sound detected by the sensor.
  • The invention according to claim 11 is a processing method comprising: an acquisition step of acquiring sound data including breath sounds; and a section identifying step of identifying, using a threshold value for a value indicating the amplitude of the sound data, at least one of a first section presumed to be breathing in the sound data and a second section between the plurality of first sections. In the section identifying step, the threshold value is determined based on the sound data.
  • The invention according to claim 12 is a program that causes a computer to execute each step of the processing method according to claim 11.
  • FIG. 1 is a diagram illustrating the configuration of the processing device according to the first embodiment.
  • FIG. 2 is a flowchart illustrating the processing method according to the first embodiment.
  • FIG. 3 is a diagram showing an example of sound data as an image.
  • FIG. 4 is a diagram illustrating the configurations of the processing device and the system according to the second embodiment.
  • FIG. 5 is a flowchart illustrating the processing method executed by the processing device according to the second embodiment.
  • FIG. 6 is a flowchart illustrating in detail the processing of the section identifying step S130.
  • FIGS. 7(a) to 7(d) and FIG. 8 are diagrams for explaining an example of the processing of the section identifying step S130 according to the second embodiment.
  • FIGS. 9(a) and 9(b) are diagrams for explaining an example of the processing of the section identifying step S130 according to the second embodiment.
  • FIG. 10 is a diagram illustrating a computer for realizing the processing device.
  • FIG. 11 is a diagram illustrating the configurations of the processing device and the system according to the third embodiment.
  • FIG. 12 is a diagram illustrating the attachment positions of the plurality of sensors.
  • FIG. 13 is a diagram showing a display example of volume information of a plurality of parts.
  • FIG. 14 is a diagram illustrating the configurations of the processing device and the system according to the fifth embodiment.
  • FIGS. 15 and 16 are diagrams showing display examples of volume information of a plurality of parts.
  • FIG. 17 is a box-and-whisker plot showing the relationship between the value indicating the volume calculated in the comparative example and the result of the hearing evaluation.
  • FIG. 18 is a box-and-whisker plot showing the relationship between the value indicating the volume calculated in the example and the result of the hearing evaluation.
  • FIG. 19 is a histogram showing the relationship between the result of the hearing evaluation and the value indicating the volume calculated in the comparative example.
  • FIG. 20 is a histogram showing the relationship between the result of the hearing evaluation and the value indicating the volume calculated in the example.
  • Unless otherwise specified, each component of the processing device 10 denotes a block at the level of functional units, not a hardware-level configuration.
  • Each component of the processing device 10 is realized by an arbitrary combination of the hardware and software of an arbitrary computer, centered on a CPU, a memory, a program loaded into the memory, a storage medium such as a hard disk storing the program, and a network connection interface. There are various modified examples of the realizing method and apparatus.
  • FIG. 1 is a diagram illustrating a configuration of a processing device 10 according to the first embodiment.
  • the processing device 10 includes an acquisition unit 110, a section identification unit 130, and a calculation unit 150.
  • the acquisition unit 110 acquires one or more sound data including breath sounds.
  • the section identifying unit 130 identifies at least one of the first section and the second section.
  • the first section is a section estimated to be breathing, and the second section is a section between the plurality of first sections.
  • The calculation unit 150 calculates volume information indicating the breathing volume of the target sound data, using the first portion of the target sound data determined based on the first section and the second portion of the target sound data determined based on the second section.
  • FIG. 2 is a flowchart illustrating the processing method according to the first embodiment.
  • the method includes an acquisition step S110, a section identification step S130, and a calculation step S150.
  • In the acquisition step S110, one or more pieces of sound data including breath sounds are acquired.
  • In the section identifying step S130, at least one of the first section and the second section is identified.
  • The first section is a section estimated to be breathing, and the second section is a section between the plurality of first sections.
  • In the calculation step S150, volume information indicating the breathing volume is calculated using the first portion of the target sound data determined based on the first section and the second portion determined based on the second section.
  • This processing method can be executed by the processing device 10.
  • As a method of calculating the breathing volume, there is, for example, a method of performing filter processing that removes specific frequency components from a biological signal, extracting the breathing sound component, and then obtaining the signal power.
  • However, in body sound detected at any part, the frequency band of the respiratory sound component and the frequency bands of the other body sound components overlap each other.
  • For example, in one measurement the respiratory sound component appeared in the band from 0 Hz to 1500 Hz, while the pulsation and blood-flow sound component appeared in the band from 0 Hz to 200 Hz; in another, the respiratory sound component appeared in the band from 0 Hz to 300 Hz, while the heart sound component appeared in the band from 0 Hz to 500 Hz.
  • the acquisition unit 110 acquires sound data from, for example, a sensor attached to a living body.
  • the section identifying unit 130 identifies the first section and the second section.
  • the first section is a section in which it is estimated that the living body is inhaling or exhaling.
  • The second section is a section between one first section and the next; more specifically, it is a section other than the first sections. That is, the second section is a section in which breathing is estimated not to be performed, i.e., a temporary pause (apnea) between breaths.
  • However, the second section does not necessarily have to lie between two first sections; for example, the end of the sound data may be identified as a second section.
  • FIG. 3 is a diagram showing an example of sound data as an image. The time waveform of the sound data is shown in display area 501, and the spectrogram of the sound data is shown in display area 502.
  • In the spectrogram, the horizontal axis represents time, the vertical axis represents frequency, and the intensity of each frequency component is represented by luminance. The horizontal axis of the time waveform and the horizontal axis of the spectrogram are aligned.
  • In display area 502, the sections identified as first sections are indicated by arrows, and the sections without arrows are second sections. A section thus denotes a time range.
  • the sound data includes a plurality of first sections that are separated from each other and a plurality of second sections that are separated from each other.
  • The part estimated to be breathing contains both a respiratory sound component and other body sound components, whereas the part estimated to be apnea contains only the other body sound components. Therefore, by comparing the data of the portion estimated to be breathing with the data of the other portion, volume information in which the influence of the other body sound components is reduced can be obtained.
  • The volume information calculated in this way correlates strongly with, for example, the loudness of the breathing sound that a person perceives when listening with his or her own hearing.
  • Using such volume information makes it easier to detect, for example, an abnormality of a living body by data processing, and is useful for assisting diagnosis and for monitoring the condition of a patient.
  • As described above, according to the present embodiment, the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data, using the first portion of the target sound data determined based on the first section and the second portion determined based on the second section. Therefore, a breathing volume close to the volume perceived by human hearing can be calculated from body sound data.
  • FIG. 4 is a diagram illustrating the configurations of the processing device 10 and the system 20 according to the second embodiment.
  • the processing device 10 according to the present embodiment has the configuration of the processing device 10 according to the first embodiment.
  • FIG. 5 is a flowchart illustrating a processing method executed by the processing device 10 according to the second embodiment.
  • the processing method according to this embodiment has the configuration of the processing method according to the first embodiment.
  • the system 20 includes a processing device 10 and a sensor 210. Then, the acquisition unit 110 acquires sound data indicating the sound detected by the sensor 210.
  • the sensor 210 detects body sounds including breath sounds.
  • the sensor 210 generates an electrical signal indicating the body sound and outputs it as sound data.
  • the sensor 210 is, for example, a microphone or a vibration sensor.
  • the vibration sensor is, for example, a displacement sensor, a speed sensor, or an acceleration sensor.
  • the microphone converts air vibrations caused by body sounds into electric signals.
  • the signal level value of this electric signal indicates the sound pressure of the vibration of the air.
  • the vibration sensor converts the vibration of the medium (for example, the body surface of the subject) caused by the body sound into an electric signal.
  • the signal level value of this electric signal directly or indirectly indicates the vibration displacement of the medium.
  • When the vibration sensor includes a diaphragm, for example, the vibration of the medium is transmitted to the diaphragm, and the vibration of the diaphragm is converted into an electric signal.
  • the electric signal may be an analog signal or a digital signal.
  • The sensor 210 may be configured to include a circuit or the like that processes the electric signal. Examples of such circuits include A/D conversion circuits and filter circuits. However, the A/D conversion and the like may instead be performed by the processing device 10.
  • the sound data is data indicating an electric signal, and is data indicating a signal level value based on the electric signal obtained by the sensor 210 in time series. That is, the sound data represents the waveform of a sound wave.
  • Here, one piece of sound data means sound data that is continuous in time.
  • the sensor 210 is, for example, an electronic stethoscope.
  • The sensor 210 is, for example, pressed against or attached by the measurer to the part of the subject's body where the body sound is to be measured.
  • In the present embodiment, the case where the acquisition unit 110 acquires only one piece of continuous sound data is described.
  • the acquisition unit 110 acquires, for example, sound data from the sensor 210 in the acquisition step S110.
  • the acquisition unit 110 can acquire the sound data detected by the sensor 210 in real time.
  • the acquisition unit 110 may read and acquire sound data that is measured by the sensor 210 in advance and is stored in the storage device.
  • the storage device may be provided inside the processing device 10 or may be provided outside the processing device 10.
  • the storage device provided inside the processing device 10 is, for example, the storage device 1080 of the computer 1000 described later.
  • The acquisition unit 110 may also acquire sound data that was output from the sensor 210 and then subjected to conversion processing or the like in the processing device 10 or in a device other than the processing device 10. Examples of the conversion processing include amplification processing and A/D conversion processing.
  • the acquisition unit 110 continuously acquires sound data including body sounds from the sensor 210, for example. It should be noted that each signal level value of the sound data is associated with the recording time. The time may be associated with the sound data in the sensor 210, or when the sound data is acquired from the sensor 210 in real time, the acquisition unit 110 may associate the acquisition time of the sound data with the sound data.
  • the sound data for which the volume information is calculated is also referred to as target sound data.
  • the acquisition unit 110 acquires only one sound data, and thus the acquired sound data is the target sound data.
  • the acquisition unit 110 can continuously acquire the sound data while the subsequent step S120, the section identification step S130, and the calculation step S150 are performed. The following processing is performed on the acquired sound data in order from the beginning.
  • the processing device 10 further includes a filter processing unit 120.
  • In step S120, the filter processing unit 120 performs the first filter processing on at least the target sound data.
  • Specifically, the filter processing unit 120 performs bandpass filter processing in which the cutoff frequency on the low-frequency side is f_L1 [Hz] and the cutoff frequency on the high-frequency side is f_H1 [Hz].
  • In the first filter processing, for example, a Fourier transform is applied to the sound data, the band below f_L1 [Hz] and the band above f_H1 [Hz] are removed in frequency space, and the time-axis waveform is then restored by the inverse Fourier transform. The Fourier transform is, for example, a fast Fourier transform (FFT).
  • The first filter processing is not limited to the above example, and may instead use, for example, an FIR (Finite Impulse Response) filter or an IIR (Infinite Impulse Response) filter. A sketch of the FFT-based variant follows below.
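The following is a minimal Python sketch (not part of the patent text) of the FFT-based bandpass described above; the function name, the NumPy representation of the sound data, and the sampling rate are illustrative assumptions. The same helper can be reused for the second filter processing of step S131 by passing f_L2 and f_H2.

```python
import numpy as np

def bandpass_fft(x: np.ndarray, fs: float, f_lo: float, f_hi: float) -> np.ndarray:
    """FFT, zero out the bands below f_lo and above f_hi, inverse FFT."""
    spectrum = np.fft.rfft(x)                          # Fourier transform (FFT)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)        # frequency of each bin
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0    # remove out-of-band components
    return np.fft.irfft(spectrum, n=len(x))            # restore the time-axis waveform

# Example with the cutoffs used in the working example (f_L1 = 100 Hz, f_H1 = 1000 Hz);
# the 4 kHz sampling rate is an assumption, as the patent does not state one.
if __name__ == "__main__":
    fs = 4000.0
    x = np.random.randn(8 * int(fs))    # stand-in for acquired sound data
    filtered = bandpass_fft(x, fs, f_lo=100.0, f_hi=1000.0)
```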
  • the calculation unit 150 calculates the volume information using the first portion and the second portion of the target sound data after the first filter processing.
  • Noise included in the sound data can be removed by the first filtering process. Note that it is not necessary to extract only the respiratory sound component by the first filter processing.
  • the sound data after the first filter processing may include components other than respiratory sounds.
  • the section specifying unit 130 specifies at least one of the first section and the second section.
  • Specifically, the section identifying unit 130 identifies at least one of the first section and the second section based on the sound data acquired by the acquisition unit 110.
  • However, the section identifying unit 130 may identify at least one of the first section and the second section without using the sound data. The method by which the section identifying unit 130 identifies sections is described in detail later.
  • Based on the result of the identification, the section identifying unit 130 generates at least one of first time information indicating the time range of the first section and second time information indicating the time range of the second section. For example, when the section identifying unit 130 identifies the first section it generates the first time information, and when it identifies the second section it generates the second time information. When a plurality of discontinuous first sections are identified, a plurality of pieces of first time information are generated; likewise, when a plurality of discontinuous second sections are identified, a plurality of pieces of second time information are generated.
  • the calculation unit 150 calculates the volume information using the data in the first area among the target sound data acquired by the acquisition unit 110.
  • The first region is, for example, a region indicating the sound during the most recent time T_1 at the point when the calculation unit 150 performs this step. Although T_1 is not particularly limited, it is, for example, 2 seconds or more and 30 seconds or less.
  • The calculation unit 150 uses at least one of the first time information and the second time information generated by the section identifying unit 130 to specify the first portion and the second portion within the first region of the target sound data.
  • When both pieces of time information are used, the calculation unit 150 takes the part of the first region in the time range indicated by the first time information as the first portion, and the part in the time range indicated by the second time information as the second portion.
  • When only the first time information is used, the calculation unit 150 takes the part of the first region in the time range indicated by the first time information as the first portion, and the remaining part of the first region as the second portion.
  • When only the second time information is used, the calculation unit 150 takes the part of the first region in the time range indicated by the second time information as the second portion, and the remaining part of the first region as the first portion.
  • The calculation unit 150 calculates a first signal strength, which is the strength of the first portion of the target sound data, and a second signal strength, which is the strength of the second portion. Specifically, for example, the calculation unit 150 calculates the RMS (root mean square) of the first portion as the first signal strength and the RMS of the second portion as the second signal strength. The calculation unit 150 may instead use another index, such as a peak-to-peak value, as the signal strength; however, the first and second signal strengths must be calculated by the same method.
  • The first signal strength is, for example, the signal strength when all the first portions are regarded as one continuous signal, and the second signal strength is, for example, the signal strength when all the second portions are regarded as one continuous signal, as in the sketch below.
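As a rough illustration of this step, the sketch below computes the RMS of all samples belonging to the first (or second) sections as one concatenated signal; representing sections as (start, end) sample-index pairs is an assumption for illustration, not something the patent prescribes.

```python
import numpy as np

def signal_strength(x: np.ndarray, sections: list[tuple[int, int]]) -> float:
    """RMS of every sample in `sections`, treated as one continuous signal."""
    samples = np.concatenate([x[start:end] for start, end in sections])
    return float(np.sqrt(np.mean(samples ** 2)))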
  • the calculation unit 150 calculates the volume information of the target sound data using the first signal strength and the second signal strength.
  • The volume information need not be an absolute volume that could be measured by another device (for example, in dB SPL); it suffices that the volume information is at least a relative value that allows pieces of volume information obtained by the processing device 10 to be compared with each other.
  • The calculation unit 150 calculates, as the volume information of the target sound data, at least one of information specifying the ratio of the first signal strength to the second signal strength and information specifying the difference between them. It is preferable that the calculation unit 150 calculates at least the information specifying the ratio; the volume information can then be expressed in dB, like an ordinary volume, and brought closer to the volume perceived by human hearing.
  • The information specifying the ratio is, for example, the value obtained by dividing the first signal strength by the second signal strength, the value obtained by dividing the second signal strength by the first signal strength, or a decibel representation of one of these values, as sketched below.
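A minimal sketch of the decibel representation, under the assumption that 20*log10 is the intended convention (RMS is an amplitude quantity; the patent says only "displayed in decibels"):

```python
import numpy as np

def volume_info_db(first_rms: float, second_rms: float) -> float:
    """Ratio of first to second signal strength in dB; the second (apnea)
    sections then sit at 0 dB, as in the working example."""
    return float(20.0 * np.log10(first_rms / second_rms))
```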
  • Thereafter, the calculation unit 150 similarly calculates the volume information every time T_2. Although T_2 is not particularly limited, it is, for example, 1 second or more and 10 seconds or less.
  • the volume information calculated by the calculation unit 150 is displayed on a display device, for example.
  • the calculation unit 150 may calculate a plurality of volume information of the target sound data in time series, and a graph showing the plurality of volume information in time series may be displayed on the display device.
  • the volume information may be displayed numerically.
  • the volume information calculated by the calculation unit 150 may be stored in the storage device or may be output to a device other than the processing device 10.
  • FIG. 6 is a flowchart showing in detail the processing of the section identifying step S130.
  • FIGS. 7(a) to 9(b) are diagrams for explaining an example of the processing of the section identifying step S130 according to the present embodiment. In FIGS. 7(a) to 7(d), 9(a), and 9(b), the horizontal axis represents the elapsed time from a reference time. An example of how the section identifying unit 130 identifies sections is described in detail below with reference to FIGS. 6 to 9(b).
  • the section specifying unit 130 specifies at least one of the first section and the second section using a threshold value for the value indicating the amplitude of the first sound data.
  • the value indicating the amplitude is a value indicating the magnitude of vibration of the first sound data at each time. Then, the section identifying unit 130 determines the threshold value based on the first sound data.
  • the sound data acquired by the acquisition unit 110 includes the first sound data and the target sound data.
  • the first sound data is sound data used for specifying a section
  • the target sound data is sound data for which volume information is calculated.
  • In the present embodiment, the first sound data and the target sound data are both this single piece of sound data, and are identical at the time of acquisition by the acquisition unit 110.
  • It is preferable that the sound data acquired by the acquisition unit 110 is body sound data acquired at the neck; the sections can then be identified more accurately, because the respiratory sound component is detected at a high rate at and near the neck.
  • the section identifying unit 130 performs the second filtering process on the first sound data acquired by the acquiring unit 110 in step S131.
  • In the second filter processing, for example, a Fourier transform is applied to the sound data, the band at or below f_L2 [Hz] and the band at or above f_H2 [Hz] are removed in frequency space, and the time-axis waveform is then restored by the inverse Fourier transform. The Fourier transform is, for example, a fast Fourier transform (FFT).
  • The second filter processing is not limited to the above example, and may instead use, for example, an FIR filter or an IIR filter.
  • the section identifying unit 130 obtains a mode value, which will be described later, based on the first sound data that has been subjected to the second filtering process.
  • The second filter processing is bandpass filter processing in which the cutoff frequency on the low-frequency side is f_L2 [Hz] and the cutoff frequency on the high-frequency side is f_H2 [Hz]. It is preferable that 150 ≤ f_L2 ≤ 250 and 550 ≤ f_H2 ≤ 650 hold.
  • By this second filter processing, mainly the respiratory sound component contained in the first sound data can be extracted. Note that the second filter processing need not extract only the respiratory sound component; the first sound data after the second filter processing may contain components other than breath sounds.
  • FIG. 7(a) illustrates the waveform of the first sound data at the time of acquisition by the acquisition unit 110, and FIG. 7(b) illustrates the waveform after the second filter processing has been applied to it. As indicated by the arrow in FIG. 7(b), the respiratory sound component is mainly extracted by the second filter processing.
  • The first sound data acquired by the acquisition unit 110 may be subjected to both the first filter processing of step S120 by the filter processing unit 120 and the second filter processing of step S131 by the section identifying unit 130. However, if the pass band of the second filter processing is narrower than that of the first filter processing, the presence or absence of the first filter processing does not affect the section identification result.
  • In step S132, the section identifying unit 130 calculates the absolute value of the data obtained in step S131; that is, each signal level value in the time-series data is converted into its absolute value.
  • FIG. 7(c) shows the result of taking the absolute value of the data of FIG. 7(b).
  • In step S133, the section identifying unit 130 performs downsampling processing on the data obtained in step S132. By this, the contour of the data waveform is obtained, as shown in FIG. 7(d).
  • Each data point of the data obtained by the downsampling processing corresponds to a value indicating the amplitude of the first sound data at each time.
  • In this way, the section identifying unit 130 obtains the mode value, described later, based on data obtained by applying at least the downsampling processing to the first sound data.
  • In FIG. 7(d), a portion estimated to be breathing is indicated by a downward arrow, and a portion estimated not to be breathing is indicated by an upward arrow. A minimal sketch of steps S132 and S133 follows.
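The sketch below assumes block-maximum downsampling; the patent says only that the contour of the waveform is obtained, so the downsampling scheme and the block size are illustrative choices.

```python
import numpy as np

def amplitude_envelope(x: np.ndarray, block: int = 100) -> np.ndarray:
    """Absolute value (S132), then one envelope point per block (S133)."""
    rect = np.abs(x)                                  # step S132: absolute value
    n = (len(rect) // block) * block                  # trim to whole blocks
    return rect[:n].reshape(-1, block).max(axis=1)    # step S133: downsample
```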
  • In step S134, the section identifying unit 130 determines whether to update (determine) the threshold value based on a predetermined update condition. Specifically, for example, when the section identifying unit 130 has never determined a threshold value for the first sound data, it determines that the update condition is satisfied; when it has determined the threshold value at least once, it determines that the update condition is not satisfied.
  • When the update condition is determined to be satisfied (Yes in step S134), the section identifying unit 130 proceeds to step S135 and performs the processing for determining the threshold value. When the update condition is determined not to be satisfied (No in step S134), the section identifying unit 130 performs step S137 using the threshold value that has already been set.
  • In step S135, the section identifying unit 130 obtains the mode of the values indicating the amplitude of the first sound data. To do so, the section identifying unit 130 counts the number of appearances of each value indicating the amplitude within a predetermined time range of the first sound data, and takes the value indicating the amplitude with the largest number of appearances as the mode value.
  • The predetermined time range may be, for example, the range from when the acquisition unit 110 started acquiring the first sound data until the section identifying unit 130 performs this step, or the most recent time T_3 at the point when this step is performed. Although T_3 is not particularly limited, it is, for example, 2 seconds or more and 30 seconds or less; T_3 may, for example, equal T_1. When the first sound data is the target sound data, the region of the predetermined time range may be matched to the first region.
  • the acquisition unit 110 may read out and acquire the sound data stored in the storage device.
  • the section identifying unit 130 may identify the mode value and the threshold value using the entire sound data.
  • FIG. 8 is a histogram illustrating the number of appearances of each amplitude. Such a histogram corresponds to a graph of the first sound data in which the horizontal axis represents the amplitude and the vertical axis represents the number of appearances.
  • the value indicating the amplitude is not limited to the value obtained by the above processing, and may be, for example, a peak-to-peak value or a standardized value.
  • Next, in step S136, the section identifying unit 130 determines a threshold value larger than the mode value. Specifically, among the values indicating the amplitude at which the histogram takes a local minimum, the one closest to the mode value is set as the threshold value. The mode value in the histogram is the value indicating the smallest amplitude among the values at which the histogram takes a local maximum. In the graph of FIG. 8, the point 505 with the largest number of appearances and a plurality of local minima are circled; in this example, the value indicating the amplitude at point 505 is the mode value, and the value indicating the amplitude at the local minimum 506 on the lowest-amplitude side among the local minima is determined as the threshold value.
  • The mode value is considered to indicate the amplitude that mainly corresponds to the apnea sections. By determining the threshold value in this manner, a threshold value that can distinguish the apnea sections from the other sections is therefore obtained. Moreover, since this threshold value is derived from the sound data acquired by the acquisition unit 110, highly accurate section identification is achieved regardless of individual differences in body sounds.
  • However, the threshold value is not limited to the above example. For example, the threshold value may be the value indicating the amplitude at the second or third local minimum from the low-amplitude side of the histogram, or the value indicating the amplitude at the local maximum next to point 505. A minimal sketch of this threshold determination follows.
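The sketch below covers steps S135 and S136 under these assumptions: the histogram bin count is arbitrary, and a simple neighbour comparison stands in for whatever peak/valley detection the implementation actually uses.

```python
import numpy as np

def determine_threshold(envelope: np.ndarray, n_bins: int = 100) -> float:
    """Mode of the amplitude histogram (point 505), then the first local
    minimum above it (point 506) as the threshold value."""
    counts, edges = np.histogram(envelope, bins=n_bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    mode_idx = int(np.argmax(counts))              # step S135: the mode value
    for i in range(mode_idx + 1, len(counts) - 1):
        if counts[i] <= counts[i - 1] and counts[i] < counts[i + 1]:
            return float(centers[i])               # step S136: threshold
    return float(centers[mode_idx])                # fallback: no minimum found
```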
  • In step S137, the section identifying unit 130 performs at least one of the following first processing and second processing.
  • The first processing identifies, as a first section, a section of the first sound data in which the value indicating the amplitude at least exceeds the threshold value. The second processing identifies, as a second section, a section in which the value indicating the amplitude is at least below the threshold value. A section in which the value equals the threshold value may be included in either the first section or the second section.
  • Specifically, the section identifying unit 130 applies the threshold value to the first sound data processed in step S133: a run in which the value indicating the amplitude continuously exceeds the threshold value becomes one first section, and a run in which it is continuously below the threshold value becomes one second section.
  • The section identifying unit 130 may identify only one of the first section and the second section; in that case, the remaining sections become the other kind of section. A sketch of this step follows below.
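A sketch of step S137 over the downsampled envelope; here samples equal to the threshold are grouped with the first sections, which the patent leaves as a free choice.

```python
import numpy as np

def identify_sections(envelope: np.ndarray, threshold: float):
    """Runs above the threshold -> first sections; runs below -> second
    sections. Returns two lists of (start, end) envelope-index pairs."""
    above = envelope >= threshold
    first, second, start = [], [], 0
    for i in range(1, len(above) + 1):
        if i == len(above) or above[i] != above[start]:
            (first if above[start] else second).append((start, i))
            start = i
    return first, second
```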
  • In FIG. 9(a), the points of the graph of FIG. 7(d) that fall below the threshold value are circled. FIG. 9(b) is an enlarged view of the small-amplitude side of FIG. 9(a), with a straight line indicating the threshold value added.
  • The section identifying unit 130 identifies sections for newly acquired sound data, for example, every time T_4. Although T_4 is not particularly limited, it is, for example, 1 second or more and 10 seconds or less; T_4 is preferably no longer than the T_2 described above.
  • The section identifying unit 130 may determine the threshold value each time it identifies sections, and may identify sections each time the calculation unit 150 calculates the volume information. For example, the threshold determination, the section identification, and the volume calculation may all be performed on sound data of the same time range.
  • the section specifying unit 130 may specify the section by another method. For example, a section may be similarly determined using a predetermined threshold value.
  • the section specifying unit 130 may specify the section based on the output of the band sensor attached to the chest or the like of the target person. For example, the band sensor can detect a bulge and movement of the chest during breathing.
  • Each functional component of the processing device 10 may be realized by hardware that implements it (for example, a hard-wired electronic circuit) or by a combination of hardware and software (for example, a combination of an electronic circuit and a program that controls it).
  • The case where each functional component of the processing device 10 is realized by a combination of hardware and software is further described below.
  • FIG. 10 is a diagram exemplifying a computer 1000 for realizing the processing device 10.
  • the computer 1000 is an arbitrary computer.
  • the computer 1000 is an SoC (System On Chip), a Personal Computer (PC), a server machine, a tablet terminal, a smartphone, or the like.
  • the computer 1000 may be a dedicated computer designed to realize the processing device 10 or a general-purpose computer.
  • the computer 1000 has a bus 1020, a processor 1040, a memory 1060, a storage device 1080, an input / output interface 1100, and a network interface 1120.
  • the bus 1020 is a data transmission path for the processor 1040, the memory 1060, the storage device 1080, the input / output interface 1100, and the network interface 1120 to exchange data with each other.
  • the processor 1040 is various processors such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or an FPGA (Field-Programmable Gate Array).
  • the memory 1060 is a main storage device realized by using a RAM (Random Access Memory) or the like.
  • the storage device 1080 is an auxiliary storage device realized by using a hard disk, SSD (Solid State Drive), memory card, ROM (Read Only Memory), or the like.
  • the input / output interface 1100 is an interface for connecting the computer 1000 and input / output devices.
  • the input / output interface 1100 is connected with an input device such as a keyboard and a mouse and an output device such as a display device.
  • the input / output interface 1100 may be connected with a touch panel or the like that doubles as a display device and an input device.
  • the network interface 1120 is an interface for connecting the computer 1000 to the network.
  • This communication network is, for example, LAN (Local Area Network) or WAN (Wide Area Network).
  • the method by which the network interface 1120 connects to the network may be a wireless connection or a wired connection.
  • the storage device 1080 stores a program module that realizes each functional component of the processing device 10.
  • the processor 1040 realizes the function corresponding to each program module by reading each of these program modules into the memory 1060 and executing them.
  • the sensor 210 is connected to, for example, the input / output interface 1100 of the computer 1000 or the network interface 1120 via a network.
  • As described above, according to the present embodiment, the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data, using the first portion of the target sound data determined based on the first section and the second portion determined based on the second section. Therefore, a breathing volume close to the volume perceived by human hearing can be calculated from body sound data.
  • FIG. 11 is a diagram illustrating the configurations of the processing device 10 and the system 20 according to the third embodiment.
  • the processing apparatus 10 and the system 20 according to this embodiment are the same as the processing apparatus 10 and the system 20 according to the second embodiment, respectively, except for the points described below.
  • the system 20 includes a plurality of sensors 210, and the acquisition unit 110 according to the present embodiment acquires a plurality of sound data indicating sounds detected by the plurality of sensors 210.
  • the plurality of sensors 210 detect body sounds in a plurality of parts of the living body of the same person, for example. Then, the acquisition unit 110 can acquire the sound data of the body sounds simultaneously detected in a plurality of parts of the living body of the same person. Further, the acquisition unit 110 can acquire a plurality of sound data in which at least some recording times overlap each other.
  • the acquisition unit 110 may further acquire sound data of body sounds of a plurality of persons, but at least the processing by the section identification unit 130 and the calculation unit 150 is performed for each person.
  • The acquisition unit 110 may also acquire a plurality of pieces of sound data whose recording times do not overlap; however, at least the processing by the section identifying unit 130 and the calculation unit 150 is performed per piece of sound data, or per group of pieces of sound data whose recording times at least partially overlap.
  • FIG. 12 is a diagram illustrating the attachment positions of the plurality of sensors 210.
  • In this example, the sensors 210 are attached at parts A to D.
  • For example, the processing device 10 receives input of information indicating the attachment part of each sensor 210 from the user prior to the acquisition of sound data.
  • Specifically, a diagram of the living body is displayed on the display device together with candidate attachment parts for the sensors 210, and the user designates the attachment position of each sensor 210 from among the candidates using an input device such as a mouse, a keyboard, or a touch panel.
  • By doing so, the sound data acquired by each sensor 210 is associated with information indicating the part.
  • the section identifying unit 130 identifies at least one of the first section and the second section based on the first sound data, as described in the second embodiment. That is, the target sound data and the first sound data are included in the plurality of sound data acquired by the acquisition unit 110.
  • the target sound data is not limited to one.
  • The case where the target sound data includes at least second sound data different from the first sound data is described below. That is, the volume information of the second sound data is calculated based on the sections identified from the first sound data. There may be a plurality of pieces of second sound data.
  • the first sound data indicates the sound detected by the first sensor 210 provided at the first position on the surface of or inside the human body.
  • the second sound data indicates the sound detected by the second sensor 210 provided at the second position on the surface of or inside the human body.
  • It is preferable that the first position is located on the neck, or that the first position is closer to the neck than the second position; the sections can then be identified more accurately.
  • the sound data obtained at the part A is the first sound data.
  • the section identifying unit 130 selects which of the plurality of sound data acquired by the acquisition unit 110 is to be the first sound data based on the information indicating the part associated with each sound data.
  • the acquisition step S110 and step S120 according to the present embodiment are performed by the acquisition unit 110 and the filter processing unit 120, respectively, similarly to the second embodiment.
  • the section specifying unit 130 specifies at least one of the first section and the second section in the first sound data, as in the second embodiment. Then, based on the first sound data, at least one of the first time information indicating the time range of the first section and the second time information indicating the time range of the second section is generated.
  • In the calculation step S150, the calculation unit 150 uses at least one of the first time information and the second time information to specify the first portion and the second portion of each piece of target sound data. By doing so, the first sections in which breathing is estimated to occur and the second sections other than these can be specified also in the second sound data included in the target sound data.
  • the calculation unit 150 calculates volume information for each target sound data, as in the second embodiment. By doing so, volume information for each part can be obtained. Note that the calculation unit 150 may set all the sound data acquired by the acquisition unit 110 as the target sound data, or may set only the sound data corresponding to the part designated by the user as the target sound data.
  • FIG. 13 is a diagram showing a display example of volume information of a plurality of parts.
  • the volume information of a plurality of parts is displayed in a state in which the correspondence with each part can be understood.
  • For example, on a map indicating the parts, a numerical value indicating the volume is displayed for each part based on its volume information. In addition, the numerical values indicating the volume are displayed as time-series graphs.
  • In each graph, the horizontal axis represents the elapsed time from the reference time and is synchronized among the parts. The scale of the graph axes may be enlarged, reduced, or translated by the user as necessary.
  • target sound data may further include the first sound data.
  • target sound data may include only the first sound data.
  • the volume information of the first sound data can be calculated in the same manner as above.
  • As described above, according to the present embodiment as well, the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data, using the first portion of the target sound data determined based on the first section and the second portion determined based on the second section. Therefore, a breathing volume close to the volume perceived by human hearing can be calculated from body sound data.
  • the processing device 10 and the system 20 according to the fourth embodiment are the same as the processing device 10 and the system 20 according to the third embodiment, except for the processing contents of the section identifying unit 130 and the calculation unit 150, respectively.
  • In the present embodiment, the acquisition unit 110 acquires a plurality of pieces of sound data indicating sounds detected by the plurality of sensors 210. The section identifying unit 130 then identifies the first section and the second section in each of two or more of the acquired pieces of sound data; that is, the section identifying unit 130 treats two or more of the pieces of sound data acquired by the acquisition unit 110 as first sound data. By identifying the sections using two or more pieces of first sound data, the accuracy of section identification can be improved.
  • In the present embodiment, the section identifying unit 130 determines a threshold value for each piece of first sound data used for section identification.
  • The section identifying unit 130 then generates third time information indicating the time range that is a first section in all of the first sound data used for section identification, or the time range that is a second section in all of that first sound data.
  • It is preferable that the two or more pieces of first sound data include the sound data detected at the neck, or the sound data detected at the position closest to the neck among the plurality of pieces of sound data; the sections can then be identified more accurately.
  • The calculation unit 150 specifies the first portion and the second portion of each piece of target sound data using the third time information. Specifically, when the third time information indicates the time range defined as the first section in all of the first sound data, the calculation unit 150 takes the part of the target sound data in that time range as the first portion and the other parts as the second portions. Conversely, when the third time information indicates the time range defined as the second section in all of the first sound data, the calculation unit 150 takes the part of the target sound data in that time range as the second portion and the other parts as the first portions.
  • the target sound data may or may not include the first sound data. Further, the target sound data may or may not include second sound data different from the first sound data.
  • the calculation unit 150 calculates volume information for each target sound data, as in the second embodiment. By doing so, volume information for each part can be obtained.
  • As described above, according to the present embodiment as well, the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data, using the first portion of the target sound data determined based on the first section and the second portion determined based on the second section. Therefore, a breathing volume close to the volume perceived by human hearing can be calculated from body sound data.
  • the section identifying unit 130 identifies the first section and the second section in each of the two or more sound data. Therefore, the accuracy of section identification can be improved.
  • FIG. 14 is a diagram illustrating the configurations of the processing device 10 and the system 20 according to the fifth embodiment.
  • the processing apparatus 10 and the system 20 according to this embodiment are the same as at least one of the second to fourth embodiments except for the points described below.
  • the processing device 10 further includes an estimation unit 170.
  • the estimation unit 170 estimates the state of the person in which the body sound is detected, based on the volume information calculated by the calculation unit 150. The details will be described below.
  • the calculation unit 150 calculates volume information of a plurality of target sound data in the same manner as at least one of the second to fourth embodiments.
  • 15 and 16 are diagrams showing display examples of volume information of a plurality of parts, respectively.
  • The estimation unit 170 acquires the calculated volume information from the calculation unit 150; information indicating a part is associated with each piece of volume information. The estimation unit 170 calculates, for example, the rate of decrease of the volume indicated by the volume information of each part, and estimates that breathing has weakened if the rate of decrease exceeds a predetermined reference value. In that case, the estimation unit 170 displays or notifies that breathing has weakened, as shown for example in FIG. 15. The estimation unit 170 may instead estimate that breathing has weakened when the rate of decrease becomes high at a predetermined number of parts or more, or when it remains high over a predetermined length of time.
  • In addition, the estimation unit 170 calculates, for example, the difference between the volume information of two pieces of sound data indicating body sounds detected at mutually symmetric positions of the living body, and estimates a suspicion of pneumothorax when the magnitude of the difference exceeds a predetermined reference value. Based on the sign of the difference, it also estimates which lung is suspected of pneumothorax. In this way, the estimation unit 170 may estimate the position of the sound source of an abnormal breath sound based on the calculated plurality of pieces of volume information, and then display or notify the suspicion of pneumothorax and the estimated position, as shown for example in FIG. 16. The estimation unit 170 may also require that the magnitude of the difference exceed the reference value over a predetermined length of time before raising the suspicion. A minimal sketch of these heuristics follows.
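In the sketch below, the part names, the reference values, and the exact definition of the decrease rate are all illustrative assumptions rather than values given in the patent.

```python
import numpy as np

def estimate_state(volumes_db: dict[str, np.ndarray],
                   drop_ref_db: float = 6.0,
                   diff_ref_db: float = 6.0) -> list[str]:
    """Flag weakened breathing per part, and a left/right asymmetry that
    may indicate pneumothorax, from time series of volume information."""
    findings = []
    for part, v in volumes_db.items():
        if float(v[0] - v[-1]) > drop_ref_db:          # volume decreased
            findings.append(f"breathing may be weakened at {part}")
    keys = {"upper_left_chest", "upper_right_chest"}   # symmetric positions
    if keys <= volumes_db.keys():
        diff = float(np.mean(volumes_db["upper_left_chest"]
                             - volumes_db["upper_right_chest"]))
        if abs(diff) > diff_ref_db:                    # quieter side suspected
            side = "right" if diff > 0 else "left"
            findings.append(f"possible pneumothorax suspected in the {side} lung")
    return findings
```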
  • the processing device 10 according to the present embodiment can also be realized by using the computer 1000 as shown in FIG.
  • As described above, according to the present embodiment as well, the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data, using the first portion of the target sound data determined based on the first section and the second portion determined based on the second section. Therefore, a breathing volume close to the volume perceived by human hearing can be calculated from body sound data.
  • the processing device 10 includes the estimation unit 170 that estimates the state of the person in which the body sound is detected, based on the volume information calculated by the calculation unit 150. Therefore, the condition of the patient or the like can be monitored.
  • Sound data were obtained by measuring body sounds at four parts (the neck, the upper right chest, the upper left chest, and the lower right chest) of 22 subjects. The obtained sound data were then played back, and a hearing evaluation of whether a breathing sound could be heard was performed: a score of 0 was given when no breathing sound was heard, 1 when it was barely heard, and 2 when it was heard well.
  • Next, the sound data were processed by the methods of the example and of the comparative example, and a value indicating the volume was calculated by each. The calculated values were then compared with the results of the hearing evaluation.
  • In the example, the value indicating the volume was calculated as described in the second embodiment. Specifically, a threshold value was determined based on each piece of sound data, and the sections were identified. The RMS of each section was then calculated on the sound data after first filter processing with f_L1 = 100 Hz and f_H1 = 1000 Hz, and the value obtained by dividing the RMS of the first sections by the RMS of the second sections, expressed in decibels, was used as the value indicating the volume; that is, the RMS of the second sections was set to 0 dB. The threshold determination, the section identification, and the calculation of the value indicating the volume were performed independently for each piece of sound data.
  • FIGS. 17 and 18 are box-and-whisker plots showing the relationship between the value indicating the volume and the result of the hearing evaluation, for the comparative example and the example, respectively.
  • FIG. 19 is a diagram showing a histogram of the relationship between the result of the hearing evaluation and the value indicating the volume calculated in the comparative example.
  • FIG. 20 is a histogram showing the relationship between the result of the hearing evaluation and the value indicating the volume calculated in the embodiment.
  • In the example, the magnitude of the value indicating the volume correlated well with evaluations 0, 1, and 2, and the sound data of evaluation 0 and of evaluation 1 could be clearly distinguished by the value indicating the volume.
  • That is, in the example, a value highly correlated with the result of the hearing evaluation could be calculated as the volume. It was therefore confirmed that, with the method of the example, a breathing volume close to the volume perceived by human hearing can be calculated from sound data.
1-1. A processing device comprising: an acquisition unit that acquires one or more pieces of sound data including a breath sound; a section specifying unit that specifies at least one of a first section in which breathing is estimated to be performed and a second section between a plurality of the first sections; and a calculation unit that calculates volume information indicating a breathing volume of target sound data, using a first portion of the target sound data determined based on the first section and a second portion of the target sound data determined based on the second section.
1-2. The processing device according to 1-1, wherein the calculation unit calculates the volume information of the target sound data using a first signal strength, which is the strength of the first portion of the target sound data, and a second signal strength, which is the strength of the second portion of the target sound data.
1-3. The processing device according to 1-2, wherein the calculation unit calculates, as the volume information of the target sound data, information specifying the ratio of the first signal strength to the second signal strength.
1-4. The processing device according to any one of 1-1 to 1-3, further comprising a filter processing unit that performs filter processing on at least the target sound data, wherein the calculation unit calculates the volume information using the first portion and the second portion of the target sound data after the filter processing, the filter processing unit performs bandpass filter processing in which the cutoff frequency on the low frequency side is f_L1 [Hz] and the cutoff frequency on the high frequency side is f_H1 [Hz], and 50 ≤ f_L1 ≤ 150 and 500 ≤ f_H1 ≤ 1500 hold.
1-5. The processing device according to any one of 1-1 to 1-4, wherein the acquisition unit acquires a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors, the section specifying unit generates, based on first sound data, at least one of first time information indicating a time range of the first section and second time information indicating a time range of the second section, the calculation unit specifies the first portion and the second portion of the target sound data using at least one of the first time information and the second time information, and the target sound data includes at least second sound data different from the first sound data.
1-6. The processing device according to 1-5, wherein the first sound data indicates a sound detected by a first sensor provided at a first position on the surface of or inside a human body, and the second sound data indicates a sound detected by a second sensor provided at a second position on the surface of or inside the human body.
1-7. The processing device according to any one of 1-1 to 1-4, wherein the acquisition unit acquires a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors, the section specifying unit specifies the first section and the second section in each of two or more pieces of the sound data and generates third time information indicating a time range specified as the first section in all of the two or more pieces of sound data, or a time range specified as the second section in all of the two or more pieces of sound data, and the calculation unit specifies the first portion and the second portion of the target sound data using the third time information.
1-8. The processing device according to any one of 1-5 to 1-7, wherein the calculation unit calculates the volume information of a plurality of pieces of target sound data, the processing device further comprising an estimation unit that estimates the position of a sound source of an abnormal breath sound based on the plurality of calculated pieces of volume information.
1-9. A system comprising: the processing device according to 1-1; and a sensor, wherein the acquisition unit acquires the sound data indicating a sound detected by the sensor.
2-1. A processing method comprising: an acquisition step of acquiring one or more pieces of sound data including a breath sound; a section specifying step of specifying at least one of a first section in which breathing is estimated to be performed and a second section between a plurality of the first sections; and a calculation step of calculating volume information indicating a breathing volume of target sound data, using a first portion of the target sound data determined based on the first section and a second portion of the target sound data determined based on the second section.
2-2. The processing method according to 2-1, wherein, in the calculation step, the volume information of the target sound data is calculated using a first signal strength, which is the strength of the first portion of the target sound data, and a second signal strength, which is the strength of the second portion of the target sound data.
2-3. The processing method according to 2-2, wherein, in the calculation step, information specifying the ratio of the first signal strength to the second signal strength is calculated as the volume information of the target sound data.
2-4. The processing method according to any one of 2-1 to 2-3, further comprising a filter step of performing filter processing on at least the target sound data, wherein, in the calculation step, the volume information is calculated using the first portion and the second portion of the target sound data after the filter processing, and, in the filter step, bandpass filter processing is performed in which the cutoff frequency on the low frequency side is f_L1 [Hz] and the cutoff frequency on the high frequency side is f_H1 [Hz].
2-5. The processing method according to any one of 2-1 to 2-4, wherein, in the acquisition step, a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors is acquired, in the section specifying step, at least one of first time information indicating a time range of the first section and second time information indicating a time range of the second section is generated based on first sound data, in the calculation step, the first portion and the second portion of the target sound data are specified using at least one of the first time information and the second time information, and the target sound data includes at least second sound data different from the first sound data.
2-6. The processing method according to 2-5, wherein the first sound data indicates a sound detected by a first sensor provided at a first position on the surface of or inside a human body, and the second sound data indicates a sound detected by a second sensor provided at a second position on the surface of or inside the human body.
2-7. The processing method according to any one of 2-1 to 2-4, wherein, in the acquisition step, a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors is acquired, in the section specifying step, the first section and the second section are specified in each of two or more pieces of the sound data, and third time information is generated indicating a time range specified as the first section in all of the two or more pieces of sound data, or a time range specified as the second section in all of the two or more pieces of sound data, and, in the calculation step, the first portion and the second portion of the target sound data are specified using the third time information.
2-8. The processing method according to any one of 2-5 to 2-7, wherein the volume information of a plurality of pieces of target sound data is calculated, the processing method further comprising an estimation step of estimating the position of a sound source of an abnormal breath sound based on the plurality of calculated pieces of volume information.
3-1. A program that causes a computer to execute each step of the processing method according to any one of 2-1 to 2-8.
4-1. A processing device comprising: an acquisition unit that acquires sound data including a breath sound; and a section specifying unit that specifies at least one of a first section in which breathing is estimated to be performed in the sound data and a second section between a plurality of the first sections, using a threshold value for a value indicating an amplitude of the sound data, wherein the value indicating the amplitude is a value indicating the magnitude of vibration at each time of the sound data, and the section specifying unit determines the threshold value based on the sound data.
4-2. The processing device according to 4-1, wherein the section specifying unit obtains a mode value of the values indicating the amplitude in the sound data, determines the threshold value to be greater than the mode value, and performs at least one of a first process of specifying, as the first section, at least a section of the sound data in which the value indicating the amplitude exceeds the threshold value, and a second process of specifying, as the second section, at least a section in which the value indicating the amplitude is less than the threshold value.
4-3. The processing device according to 4-2, wherein, when the sound data is represented by a graph in which the horizontal axis indicates the value indicating the amplitude and the vertical axis indicates the number of appearances, the threshold value is, among the values indicating the amplitude at which one or more local minimum values appear, the value indicating the amplitude that is closest to the mode value.
4-4. The processing device according to 4-3, wherein the mode value is the value indicating the smallest amplitude among a plurality of values indicating the amplitude at which the number of appearances takes a maximum value.
4-5. The processing device according to any one of 4-2 to 4-4, wherein the section specifying unit obtains the mode value based on the sound data after filter processing is performed, the filter processing is bandpass filter processing in which the cutoff frequency on the low frequency side is f_L2 [Hz] and the cutoff frequency on the high frequency side is f_H2 [Hz], and 150 ≤ f_L2 ≤ 250 and 550 ≤ f_H2 ≤ 650 hold.
4-6. The processing device according to any one of 4-2 to 4-5, wherein the section specifying unit obtains the mode value based on data obtained by performing at least downsampling processing on the sound data.
4-7. The processing device according to any one of 4-1 to 4-6, wherein the acquisition unit acquires a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors, and the section specifying unit specifies at least one of the first section and the second section in first sound data included in the plurality of pieces of sound data, generates at least one of first time information indicating a time range of the first section and second time information indicating a time range of the second section, and specifies the first section and the second section of second sound data, included in the plurality of pieces of sound data and different from the first sound data, based on at least one of the first time information and the second time information.
4-8. The processing device according to 4-7, wherein the first sound data indicates a sound detected by a first sensor provided at a first position on the surface of or inside a human body, and the second sound data indicates a sound detected by a second sensor provided at a second position on the surface of or inside the human body.
4-9. The processing device according to any one of 4-1 to 4-6, wherein the acquisition unit acquires a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors, and the section specifying unit performs a process of specifying the first section and the second section in each of two or more pieces of the sound data, and a process of generating third time information indicating a time range specified as the first section in all of the two or more pieces of sound data, or a time range specified as the second section in all of the two or more pieces of sound data.
4-10. A system comprising: the processing device according to any one of 4-1 to 4-9; and a sensor, wherein the acquisition unit acquires the sound data indicating a sound detected by the sensor.
5-1. A processing method comprising: an acquisition step of acquiring sound data including a breath sound; and a section specifying step of specifying at least one of a first section in which breathing is estimated to be performed in the sound data and a second section between a plurality of the first sections, using a threshold value for a value indicating an amplitude of the sound data, wherein the value indicating the amplitude is a value indicating the magnitude of vibration at each time of the sound data, and, in the section specifying step, the threshold value is determined based on the sound data.
5-2. The processing method according to 5-1, wherein, in the section specifying step, a mode value of the values indicating the amplitude in the sound data is obtained, the threshold value is determined to be greater than the mode value, and at least one of a first process of specifying, as the first section, at least a section of the sound data in which the value indicating the amplitude exceeds the threshold value, and a second process of specifying, as the second section, at least a section in which the value indicating the amplitude is less than the threshold value, is performed.
5-3. The processing method according to 5-2, wherein, when the sound data is represented by a graph in which the horizontal axis indicates the value indicating the amplitude and the vertical axis indicates the number of appearances, the threshold value is, among the values indicating the amplitude at which one or more local minimum values appear, the value indicating the amplitude that is closest to the mode value.
5-4. The processing method according to 5-3, wherein the mode value is the value indicating the smallest amplitude among a plurality of values indicating the amplitude at which the number of appearances takes a maximum value.
5-5. The processing method according to any one of 5-2 to 5-4, wherein the mode value is obtained based on the sound data after filter processing is performed, the filter processing is bandpass filter processing in which the cutoff frequency on the low frequency side is f_L2 [Hz] and the cutoff frequency on the high frequency side is f_H2 [Hz], and 150 ≤ f_L2 ≤ 250 and 550 ≤ f_H2 ≤ 650 hold.
5-6. The processing method according to any one of 5-2 to 5-5, wherein the mode value is obtained based on data obtained by performing at least downsampling processing on the sound data.
5-7. The processing method according to any one of 5-1 to 5-6, wherein, in the acquisition step, a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors is acquired, and, in the section specifying step, at least one of the first section and the second section is specified in first sound data included in the plurality of pieces of sound data, at least one of first time information indicating a time range of the first section and second time information indicating a time range of the second section is generated, and the first section and the second section of second sound data, included in the plurality of pieces of sound data and different from the first sound data, are specified based on at least one of the first time information and the second time information.
5-8. The processing method according to 5-7, wherein the first sound data indicates a sound detected by a first sensor provided at a first position on the surface of or inside a human body, the second sound data indicates a sound detected by a second sensor provided at a second position on the surface of or inside the human body, and the first position is located at the neck, or the first position is closer to the neck than the second position.
5-9. The processing method according to any one of 5-1 to 5-6, wherein a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors is acquired, and, in the section specifying step, a process of specifying the first section and the second section in each of two or more pieces of the sound data, and a process of generating third time information indicating a time range specified as the first section in all of the two or more pieces of sound data, or a time range specified as the second section in all of the two or more pieces of sound data, are performed.
6-1. A program that causes a computer to execute each step of the processing method according to any one of 5-1 to 5-9.


Abstract

This processing device is provided with an acquisition unit and a section specifying unit. The acquisition unit acquires sound data including a breath sound. The section specifying unit specifies at least one of a first section and a second section using a threshold value for values indicating the amplitude of the sound data. The first section is a section in which breathing is estimated to be taking place, and the second section is a section between a plurality of the first sections. The value indicating the amplitude indicates the magnitude of vibration of the sound data at each time. The section specifying unit determines the threshold value based on the sound data.

Description

In the medical field, it is important to measure a patient's breathing volume. For example, detecting a decrease in breathing volume helps in the early detection of abnormalities such as pneumothorax. In recent years, methods have also been proposed that analyze signals obtained with sensors such as electronic stethoscopes to estimate the respiratory state.
The invention according to claim 1 is a processing device comprising: an acquisition unit that acquires sound data including a breath sound; and a section specifying unit that specifies at least one of a first section in which breathing is estimated to be performed in the sound data and a second section between a plurality of the first sections, using a threshold value for a value indicating an amplitude of the sound data, wherein the value indicating the amplitude is a value indicating the magnitude of vibration at each time of the sound data, and the section specifying unit determines the threshold value based on the sound data.
The invention according to claim 10 is a system comprising: the processing device according to claim 1; and a sensor, wherein the acquisition unit acquires the sound data indicating a sound detected by the sensor.
The invention according to claim 11 is a processing method comprising: an acquisition step of acquiring sound data including a breath sound; and a section specifying step of specifying at least one of a first section in which breathing is estimated to be performed in the sound data and a second section between a plurality of the first sections, using a threshold value for a value indicating an amplitude of the sound data, wherein the value indicating the amplitude is a value indicating the magnitude of vibration at each time of the sound data, and, in the section specifying step, the threshold value is determined based on the sound data.
The invention according to claim 12 is a program that causes a computer to execute each step of the processing method according to claim 11.
The above-described object and other objects, features, and advantages will become more apparent from the preferred embodiments described below and the accompanying drawings.
FIG. 1 is a diagram illustrating the configuration of the processing device according to the first embodiment.
FIG. 2 is a flowchart illustrating the processing method according to the first embodiment.
FIG. 3 is a diagram displaying an example of sound data as an image.
FIG. 4 is a diagram illustrating the configurations of the processing device and the system according to the second embodiment.
FIG. 5 is a flowchart illustrating the processing method executed by the processing device according to the second embodiment.
FIG. 6 is a flowchart illustrating in detail the processing content of section specifying step S130.
FIGS. 7(a) to 7(d) are diagrams for explaining an example of the processing content of section specifying step S130 according to the second embodiment.
FIG. 8 is a diagram for explaining an example of the processing content of section specifying step S130 according to the second embodiment.
FIGS. 9(a) and 9(b) are diagrams for explaining an example of the processing content of section specifying step S130 according to the second embodiment.
FIG. 10 is a diagram illustrating a computer for realizing the processing device.
FIG. 11 is a diagram illustrating the configurations of the processing device and the system according to the third embodiment.
FIG. 12 is a diagram illustrating attachment positions of a plurality of sensors.
FIG. 13 is a diagram showing a display example of volume information for a plurality of sites.
FIG. 14 is a diagram illustrating the configurations of the processing device and the system according to the fifth embodiment.
FIG. 15 is a diagram showing a display example of volume information for a plurality of sites.
FIG. 16 is a diagram showing a display example of volume information for a plurality of sites.
FIG. 17 is a box-and-whisker plot showing the relationship between the value indicating the volume calculated in the comparative example and the results of the hearing evaluation.
FIG. 18 is a box-and-whisker plot showing the relationship between the value indicating the volume calculated in the example and the results of the hearing evaluation.
FIG. 19 is a histogram showing the relationship between the results of the hearing evaluation and the value indicating the volume calculated in the comparative example.
FIG. 20 is a histogram showing the relationship between the results of the hearing evaluation and the value indicating the volume calculated in the example.
Embodiments of the present invention will be described below with reference to the drawings. In all the drawings, similar components are denoted by the same reference numerals, and their description is omitted as appropriate.
In the following description, each component of the processing device 10 represents a functional block rather than a hardware unit, unless otherwise specified. Each component of the processing device 10 is realized by an arbitrary combination of hardware and software, centered on the CPU of an arbitrary computer, a memory, a program loaded into the memory, a storage medium such as a hard disk storing the program, and a network connection interface. There are various modifications of the realizing method and the apparatus.
(First embodiment)
FIG. 1 is a diagram illustrating the configuration of the processing device 10 according to the first embodiment. The processing device 10 according to the present embodiment includes an acquisition unit 110, a section specifying unit 130, and a calculation unit 150. The acquisition unit 110 acquires one or more pieces of sound data including a breath sound. The section specifying unit 130 specifies at least one of a first section and a second section. The first section is a section in which breathing is estimated to be performed, and the second section is a section between a plurality of first sections. The calculation unit 150 calculates volume information indicating the breathing volume of target sound data, using a first portion of the target sound data determined based on the first section and a second portion of the target sound data determined based on the second section.
FIG. 2 is a flowchart illustrating the processing method according to the first embodiment. The method includes an acquisition step S110, a section specifying step S130, and a calculation step S150. In the acquisition step S110, one or more pieces of sound data including a breath sound are acquired. In the section specifying step S130, at least one of a first section and a second section is specified. The first section is a section in which breathing is estimated to be performed, and the second section is a section between a plurality of first sections. In the calculation step S150, volume information indicating the breathing volume of target sound data is calculated using a first portion of the target sound data determined based on the first section and a second portion of the target sound data determined based on the second section.
This processing method can be executed by the processing device 10.
One way to calculate a breathing volume is, for example, to apply filter processing that removes specific frequency components from a biological signal and to obtain the signal power after extracting the breath sound component. However, it is difficult to uniquely determine a cutoff frequency that separates the breath sound component from the other body sound components (pulsation, heart sounds, blood flow sounds, and so on). This is because the cutoff frequency appropriate for extracting breath sounds by filtering alone differs depending on the site at which the body sound is acquired and on individual differences between subjects.
Moreover, the frequency band of the breath sound component and the frequency bands of the other body sound components overlap each other in the body sounds detected at any site. In one measurement example, in the body sound obtained by a sensor attached to the neck, the breath sound component appeared in the band from 0 Hz to 1500 Hz, and the pulsation and blood flow sound components appeared in the band from 0 Hz to 200 Hz. In the body sound obtained by a sensor attached to the upper right chest, the breath sound component appeared in the band from 0 Hz to 300 Hz, and the heart sound component appeared in the band from 0 Hz to 500 Hz. In the body sound obtained by a sensor attached to the upper left chest, the breath sound component appeared in the band from 0 Hz to 300 Hz, and the heart sound component appeared in the band from 0 Hz to 400 Hz. In the body sound obtained by a sensor attached to the lower right chest, the breath sound component appeared in the band from 0 Hz to 300 Hz, and the heart sound component appeared in the band from 0 Hz to 300 Hz. Therefore, simply limiting the band and obtaining the signal power cannot avoid the influence of the other components.
In the processing device 10 according to the present embodiment, the acquisition unit 110 acquires sound data from, for example, a sensor attached to a living body. The section specifying unit 130 specifies the first section and the second section. The first section is a section in which the living body is estimated to be inhaling or exhaling. The second section is a section between one first section and another; more specifically, the second section is a section other than the first sections. That is, the second section is a section in which breathing is estimated not to be performed, a section in which breathing is estimated to have paused temporarily between breaths. Note that a second section does not necessarily have to lie between two first sections; for example, an end portion of the sound data may be specified as a second section.
FIG. 3 is a diagram displaying an example of sound data as an image. In this figure, a display area 501 shows the time waveform of the sound data, and a display area 502 shows the spectrogram of the sound data. In the spectrogram, the horizontal axis is time, the vertical axis is frequency, and the intensity of each frequency component is indicated by luminance. The horizontal axis of the time waveform and that of the spectrogram are aligned. Sections specified as first sections are indicated by arrows in the display area 502; the sections without arrows are second sections. A section thus indicates a time range. The sound data contains a plurality of first sections separated from one another and a plurality of second sections separated from one another.
Here, the portions in which breathing is estimated to be performed are considered to contain the breath sound component and other body sound components, whereas the portions estimated to be apneic are considered to contain only the other body sound components. Therefore, by comparing the data of the portions in which breathing is estimated to be performed with the data of the remaining portions, volume information in which the influence of the other body sound components is reduced can be obtained. The volume information calculated in this way correlates strongly with, for example, the volume of the breath sound that a person perceives when listening. Using such volume information makes it easier to detect, for example, abnormalities of a living body by data processing, which is useful, for example, for assisting diagnosis and for monitoring a patient's condition.
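As a simple numerical illustration (with hypothetical values, not taken from this publication): if the RMS of the portions in which breathing is estimated to be performed is 0.02 and the RMS of the remaining portions is 0.01, the ratio expressed in decibels is 20·log10(0.02/0.01) ≈ 6.0 dB; that is, the breath sound stands about 6 dB above the background body sounds.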
The detailed processing performed in the processing device 10 will be described in the second and subsequent embodiments.
As described above, according to the present embodiment, the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data, using the first portion of the target sound data determined based on the first section and the second portion determined based on the second section. Therefore, a breathing volume close to the volume perceived by human hearing can be calculated from body sound data.
(Second embodiment)
FIG. 4 is a diagram illustrating the configurations of the processing device 10 and the system 20 according to the second embodiment. The processing device 10 according to the present embodiment has the configuration of the processing device 10 according to the first embodiment. FIG. 5 is a flowchart illustrating the processing method executed by the processing device 10 according to the second embodiment. The processing method according to the present embodiment has the configuration of the processing method according to the first embodiment.
The system 20 according to the present embodiment includes the processing device 10 and a sensor 210. The acquisition unit 110 acquires sound data indicating the sound detected by the sensor 210.
Each component of the processing device 10 and the system 20 is described in detail below.
The sensor 210 detects body sounds including breath sounds. The sensor 210 generates an electric signal representing the body sound and outputs it as sound data. The sensor 210 is, for example, a microphone or a vibration sensor. The vibration sensor is, for example, a displacement sensor, a velocity sensor, or an acceleration sensor. A microphone converts vibrations of the air caused by the body sound into an electric signal; the signal level value of this electric signal indicates the sound pressure of the air vibration. A vibration sensor, on the other hand, converts vibrations of a medium (for example, the subject's body surface) caused by the body sound into an electric signal; the signal level value of this electric signal directly or indirectly indicates the vibration displacement of the medium. For example, when the vibration sensor includes a diaphragm, the vibration of the medium is transmitted to the diaphragm, and the vibration of the diaphragm is converted into an electric signal. The electric signal may be an analog signal or a digital signal. The sensor 210 may also include circuits that process the electric signal, such as an A/D conversion circuit and a filter circuit. However, the A/D conversion and the like may instead be performed in the processing device 10.
The sound data is data representing the electric signal, that is, data indicating, in time series, signal level values based on the electric signal obtained by the sensor 210. In other words, the sound data represents the waveform of a sound wave. Note that one piece of sound data means temporally continuous sound data.
The sensor 210 is, for example, an electronic stethoscope. The sensor 210 is, for example, pressed against or attached by the measurer to the site of the subject's body at which the body sound is to be measured. In the present embodiment, an example in which the acquisition unit 110 acquires only one continuous piece of sound data will be described.
In the acquisition step S110, the acquisition unit 110 acquires sound data from, for example, the sensor 210. In this case, the acquisition unit 110 can acquire the sound data detected by the sensor 210 in real time. Alternatively, the acquisition unit 110 may read and acquire sound data measured in advance by the sensor 210 and held in a storage device. The storage device may be provided inside or outside the processing device 10; a storage device provided inside the processing device 10 is, for example, the storage device 1080 of the computer 1000 described later. The acquisition unit 110 may also acquire sound data that has been output from the sensor 210 and subjected to conversion processing or the like in the processing device 10 or in a device other than the processing device 10. Examples of such conversion processing include amplification processing and A/D conversion processing.
The acquisition unit 110 acquires sound data including body sounds from the sensor 210, for example, continuously. Each signal level value of the sound data is associated with a recording time. The time may be associated with the sound data in the sensor 210, or, when the sound data is acquired from the sensor 210 in real time, the acquisition unit 110 may associate the acquisition time of the sound data with that sound data.
Of the sound data acquired by the acquisition unit 110, the sound data for which the volume information is to be calculated is also referred to as the target sound data. In the present embodiment, since the acquisition unit 110 acquires only one piece of sound data, the acquired sound data is the target sound data. The acquisition unit 110 can continue to acquire sound data while the subsequent step S120, section specifying step S130, and calculation step S150 are being performed. The following processing is performed on the acquired sound data in order from its beginning.
In the example of this figure, the processing device 10 further includes a filter processing unit 120. When the acquisition unit 110 starts acquiring the sound data, step S120 starts. In step S120, the filter processing unit 120 performs first filter processing on at least the target sound data. Specifically, the filter processing unit 120 performs bandpass filter processing in which the cutoff frequency on the low frequency side is f_L1 [Hz] and the cutoff frequency on the high frequency side is f_H1 [Hz]. In the first filter processing, for example, a Fourier transform is applied to the sound data, the band at or below f_L1 [Hz] and the band at or above f_H1 [Hz] are removed in the frequency domain, and the data is then returned to a time-domain waveform by an inverse Fourier transform. The Fourier transform is, for example, a fast Fourier transform (FFT). However, the first filter processing is not limited to this example and may be, for example, processing by an FIR (Finite Impulse Response) filter or an IIR (Infinite Impulse Response) filter.
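As a concrete illustration, the following is a minimal sketch, in Python with NumPy, of the FFT-based bandpass filtering described above. The array name sound_data and the sampling rate fs are illustrative assumptions of this sketch, not values given in the present embodiment; as noted above, an FIR or IIR implementation would serve equally well.

```python
import numpy as np

def fft_bandpass(sound_data, fs, f_low, f_high):
    """Remove the band at or below f_low [Hz] and at or above f_high [Hz],
    then return to a time-domain waveform."""
    spectrum = np.fft.rfft(sound_data)                    # to frequency domain
    freqs = np.fft.rfftfreq(len(sound_data), d=1.0 / fs)  # bin frequencies [Hz]
    spectrum[(freqs <= f_low) | (freqs >= f_high)] = 0.0  # cut the stop bands
    return np.fft.irfft(spectrum, n=len(sound_data))      # back to time domain

# First filter processing with, for example, f_L1 = 100 Hz and f_H1 = 1000 Hz
# (the sampling rate of 4000 Hz is likewise only an assumption):
# filtered = fft_bandpass(sound_data, fs=4000, f_low=100.0, f_high=1000.0)
```

The same sketch can be reused for the second filter processing described later, for example with f_L2 = 200 Hz and f_H2 = 600 Hz.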
In the example of this figure, the calculation unit 150 calculates the volume information using the first portion and the second portion of the target sound data after the first filter processing. Here, it is preferable that 50 ≤ f_L1 ≤ 150 and 500 ≤ f_H1 ≤ 1500 hold. The first filter processing can remove noise contained in the sound data. Note that the first filter processing does not have to extract only the breath sound component; the sound data after the first filter processing may contain components other than breath sounds.
The section specifying unit 130 specifies at least one of the first section and the second section. In the examples of FIG. 4 and FIG. 5, the section specifying unit 130 specifies at least one of the first section and the second section based on the sound data acquired by the acquisition unit 110. However, the section specifying unit 130 may specify at least one of the first section and the second section without using the sound data. The method by which the section specifying unit 130 specifies the sections is described in detail later.
Further, based on the result of the section specification, the section specifying unit 130 generates at least one of first time information indicating the time range of a first section and second time information indicating the time range of a second section. For example, when the section specifying unit 130 specifies a first section, it generates first time information, and when it specifies a second section, it generates second time information. When the section specifying unit 130 specifies a plurality of discontinuous first sections, a plurality of pieces of first time information are generated; likewise, when it specifies a plurality of discontinuous second sections, a plurality of pieces of second time information are generated.
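The following is a minimal sketch of how such time information could be derived from a per-sample section decision. The boolean array is_first (True for samples classified into the first section) and the sampling rate fs are assumed inputs of this sketch, not names used in the present embodiment.

```python
import numpy as np

def sections_to_time_ranges(is_first, fs):
    """Return a list of (start, end) times in seconds, one tuple per run of
    consecutive True samples; each tuple corresponds to one piece of first
    time information. Applying the function to ~is_first instead yields the
    second time information."""
    edges = np.flatnonzero(np.diff(is_first.astype(np.int8)))
    starts = np.r_[0, edges + 1]             # first sample index of each run
    ends = np.r_[edges, len(is_first) - 1]   # last sample index of each run
    return [(s / fs, (e + 1) / fs)
            for s, e in zip(starts, ends) if is_first[s]]
```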
In the calculation step S150, the calculation unit 150 calculates the volume information using the data in a first region of the target sound data acquired by the acquisition unit 110. The first region is, for example, the region representing the sound during the most recent time T1 at the moment the calculation unit 150 performs this step. T1 is not particularly limited, but is, for example, 2 seconds or more and 30 seconds or less.
As described above, the calculation unit 150 uses at least one of the first time information and the second time information generated by the section specifying unit 130 to specify the first portion and the second portion within the first region of the target sound data.
When the section specifying unit 130 generates both the first time information and the second time information, the calculation unit 150 takes, as the first portion, the part of the first region of the target sound data within the time range indicated by the first time information, and takes, as the second portion, the part of the first region within the time range indicated by the second time information. When the section specifying unit 130 generates only the first time information, the calculation unit 150 takes, as the first portion, the part of the first region within the time range indicated by the first time information, and takes the rest of the first region as the second portion. Likewise, when the section specifying unit 130 generates only the second time information, the calculation unit 150 takes, as the second portion, the part of the first region within the time range indicated by the second time information, and takes the rest of the first region as the first portion.
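A sketch of this portion identification follows, for the case in which only the first time information is generated; target_region (the samples of the first region of the target sound data) and first_ranges (the first time information as (start, end) tuples in seconds) are illustrative names assumed here.

```python
import numpy as np

def split_portions(target_region, fs, first_ranges):
    """Samples inside any first-time-information range form the first
    portion; all remaining samples of the first region form the second
    portion."""
    in_first = np.zeros(len(target_region), dtype=bool)
    for start, end in first_ranges:
        in_first[int(start * fs):int(end * fs)] = True
    return target_region[in_first], target_region[~in_first]
```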
The calculation unit 150 then calculates a first signal strength, which is the strength of the first portion of the target sound data, and a second signal strength, which is the strength of the second portion of the target sound data. Specifically, for example, the calculation unit 150 calculates the RMS (root mean square) of the first portion of the target sound data as the first signal strength, and the RMS of the second portion as the second signal strength. The calculation unit 150 may calculate another index, such as a peak-to-peak value, as the signal strength instead of the RMS. However, the first signal strength and the second signal strength are calculated by the same method.
When the processing range contains two or more first portions, the first signal strength is, for example, the value indicating the signal strength when all the first portions are regarded as one continuous signal. Likewise, when the processing range contains two or more second portions, the second signal strength is, for example, the value indicating the signal strength when all the second portions are regarded as one continuous signal.
The calculation unit 150 then calculates the volume information of the target sound data using the first signal strength and the second signal strength. The volume information need not be an absolute volume measured by other equipment (for example, dB SPL); it suffices that the volume information is a relative value that can at least be compared among pieces of volume information obtained by the processing device 10.
Specifically, the calculation unit 150 calculates, as the volume information of the target sound data, at least one of information specifying the ratio of the first signal strength to the second signal strength and information specifying the difference between the first signal strength and the second signal strength. It is preferable that the calculation unit 150 calculates at least the information specifying the ratio of the first signal strength to the second signal strength as the volume information. In that case, the volume information can be expressed in dB, like an ordinary volume, and becomes closer to the volume perceived by human hearing. The information specifying the ratio of the first signal strength to the second signal strength is, for example, one of the value obtained by dividing the first signal strength by the second signal strength, the value obtained by dividing the second signal strength by the first signal strength, and either of these values expressed in decibels.
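A minimal sketch of this calculation is shown below, assuming (as is usual for amplitude ratios, though not stated explicitly here) that the decibel display uses 20·log10, and that all first portions and all second portions have already been concatenated as described above.

```python
import numpy as np

def rms(x):
    # Root mean square of a 1-D array of signal level values.
    return np.sqrt(np.mean(np.square(x)))

def volume_info_db(first_portion, second_portion):
    """Ratio of the first signal strength to the second signal strength,
    expressed in decibels; the second portion thus corresponds to 0 dB,
    as in the example described above."""
    return 20.0 * np.log10(rms(first_portion) / rms(second_portion))
```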
 算出部150は同様にして、たとえば時間T毎に音量情報を算出する。Tは特に限定されないが、たとえば1秒以上10秒以下である。 The calculation unit 150 similarly calculates the volume information for each time T 2 . T 2 is not particularly limited, but is, for example, 1 second or more and 10 seconds or less.
 算出部150で算出された音量情報はたとえば表示装置に表示される。ここで、算出部150が対象の音データの音量情報を複数、時系列に算出し、複数の音量情報を時系列に示すグラフが表示装置に表示されても良い。ただし、音量情報は数値で表示されても良い。また、算出部150で算出された音量情報は、記憶装置に記憶されても良いし、処理装置10以外の装置に対して出力されても良い。 The volume information calculated by the calculation unit 150 is displayed on a display device, for example. Here, the calculation unit 150 may calculate a plurality of volume information of the target sound data in time series, and a graph showing the plurality of volume information in time series may be displayed on the display device. However, the volume information may be displayed numerically. The volume information calculated by the calculation unit 150 may be stored in the storage device or may be output to a device other than the processing device 10.
 図6は、区間特定ステップS130の処理内容を詳しく例示するフローチャートである。また、図7(a)から図9(b)は、本実施形態に係る、区間特定ステップS130の処理内容の例を説明するための図である。なお、図7(a)から図7(d)、図9(a)、および図9(b)において、横軸は基準時刻からの経過時間を表示している。図6から図9(b)を参照し、区間特定部130が区間を特定する方法の例について以下に詳しく説明する。 FIG. 6 is a flowchart showing in detail the processing contents of the section identifying step S130. Further, FIGS. 7A to 9B are diagrams for explaining an example of the processing content of the section identifying step S130 according to the present embodiment. 7A to 7D, 9A, and 9B, the horizontal axis represents the elapsed time from the reference time. An example of a method for the section specifying unit 130 to specify a section will be described in detail below with reference to FIGS. 6 to 9B.
 区間特定部130は、区間特定ステップS130において、第1区間と第2区間との少なくとも一方を、第1の音データの振幅を示す値についての閾値を用いて特定する。振幅を示す値は第1の音データの各時刻の振動の大きさを示す値である。そして区間特定部130は、閾値を第1の音データに基づいて決定する。 In the section specifying step S130, the section specifying unit 130 specifies at least one of the first section and the second section using a threshold value for the value indicating the amplitude of the first sound data. The value indicating the amplitude is a value indicating the magnitude of vibration of the first sound data at each time. Then, the section identifying unit 130 determines the threshold value based on the first sound data.
 取得部110で取得された音データには、第1の音データと対象の音データとが含まれる。ここで、第1の音データは区間の特定に用いられる音データであり、対象の音データは、音量情報の算出対象となる音データである。取得部110が一の音データのみを取得する場合、第1の音データと対象の音データはいずれもこの一の音データであり、取得部110における取得時において互いに同一である。本実施形態において、取得部110で取得される音データは頸部で取得された生体音のデータであることが好ましい。そうすれば、区間の特定をより正確に行うことができる。頸部やその近辺では、呼吸音成分が高い比率で検出できるからである。 The sound data acquired by the acquisition unit 110 includes the first sound data and the target sound data. Here, the first sound data is sound data used for specifying a section, and the target sound data is sound data for which volume information is calculated. When the acquisition unit 110 acquires only one sound data, both the first sound data and the target sound data are this one sound data, and are the same at the time of acquisition by the acquisition unit 110. In the present embodiment, it is preferable that the sound data acquired by the acquisition unit 110 is body sound data acquired by the neck. Then, the section can be specified more accurately. This is because the respiratory sound component can be detected at a high rate in the neck and its vicinity.
 区間特定部130は、取得部110で取得された第1の音データに対しステップS131において第2のフィルタ処理を行う。第2のフィルタ処理ではたとえば、音データに対してフーリエ変換を行い、周波数空間上でfL2[Hz]以下の帯域およびfH2[Hz]以上の帯域を除去する。その上で、逆フーリエ変換で時間軸波形に戻す。フーリエ変換はたとえば高速フーリエ変換(FFT)である。ただし、第2のフィルタ処理は上記の例に限定されず、たとえばFIRフィルタまたはIIRフィルタによる処理であっても良い。 The section identifying unit 130 performs the second filtering process on the first sound data acquired by the acquiring unit 110 in step S131. In the second filter process, for example, Fourier transform is performed on the sound data to remove the band of f L2 [Hz] or less and the band of f H2 [Hz] or more in the frequency space. After that, the time-axis waveform is restored by the inverse Fourier transform. The Fourier transform is, for example, a fast Fourier transform (FFT). However, the second filter processing is not limited to the above example, and may be processing by an FIR filter or an IIR filter, for example.
 区間特定部130は第2のフィルタ処理を行った後の第1の音データに基づき、後述する最頻値を求める。第2のフィルタ処理は、低周波数側のカットオフ周波数をfL2[Hz]とし、高周波数側のカットオフ周波数をfH2[Hz]とするバンドパスフィルタ処理である。ここで、150≦fL2≦250が成り立つことが好ましく、また、550≦fH2≦650が成り立つことが好ましい。第2のフィルタ処理により、第1の音データに含まれる呼吸音成分を主に抽出することができる。なお、第2のフィルタ処理により呼吸音成分のみを抽出する必要は無い。また、第2のフィルタ処理後の第1の音データには呼吸音以外の成分が含まれても良い。 The section identifying unit 130 obtains a mode value, which will be described later, based on the first sound data that has been subjected to the second filtering process. The second filter process is a bandpass filter process in which the cutoff frequency on the low frequency side is f L2 [Hz] and the cutoff frequency on the high frequency side is f H2 [Hz]. Here, it is preferable that 150 ≦ f L2 ≦ 250 holds, and it is preferable that 550 ≦ f H2 ≦ 650 holds. By the second filter processing, the respiratory sound component included in the first sound data can be mainly extracted. Note that it is not necessary to extract only the respiratory sound component by the second filter processing. Further, the first sound data after the second filter processing may include components other than breath sounds.
 図7(a)は取得部110での取得時の第1の音データの波形を例示する図であり、図7(b)は、図7(a)の波形に第2のフィルタ処理を施した後の波形を例示する図である。図7(b)に矢印で示したように、第2のフィルタ処理により呼吸音成分が主に抽出される。 FIG. 7A is a diagram exemplifying the waveform of the first sound data at the time of acquisition by the acquisition unit 110, and FIG. 7B is the waveform of FIG. 7A subjected to the second filter processing. It is a figure which illustrates the waveform after doing. As shown by the arrow in FIG. 7B, the respiratory sound component is mainly extracted by the second filter processing.
 なお、取得部110で取得された第1の音データには、フィルタ処理部120によるステップS120の第1のフィルタ処理と、区間特定部130によるステップS131の第2のフィルタ処理との両方が行われても良い。ただし、第2のフィルタ処理の透過帯域の方が第1のフィルタ処理の透過帯域よりも狭い場合、第1のフィルタ処理の有無は区間の特定結果に影響しない。 It should be noted that the first sound data acquired by the acquisition unit 110 is subjected to both the first filtering process of step S120 by the filtering unit 120 and the second filtering process of step S131 by the section identifying unit 130. You may be broken. However, if the pass band of the second filter process is narrower than the pass band of the first filter process, the presence or absence of the first filter process does not affect the section identification result.
 次いで、ステップS132において区間特定部130は、ステップS131で得られたデータの絶対値を算出する。すなわち、時系列データにおける各信号レベル値が絶対値に変換される。図7(c)は図7(b)のデータの絶対値を算出した結果を示す図である。 Next, in step S132, the section identifying unit 130 calculates the absolute value of the data obtained in step S131. That is, each signal level value in the time series data is converted into an absolute value. FIG. 7C is a diagram showing the result of calculating the absolute value of the data of FIG. 7B.
Next, in step S133, the section identifying unit 130 performs downsampling processing on the data obtained in step S132. As a result, the envelope of the data waveform is obtained as shown in FIG. 7(d). Each data point of the data obtained by the downsampling processing corresponds to a value indicating the amplitude of the first sound data at each time. The section identifying unit 130 thus obtains the mode value, described later, based on data obtained by performing at least downsampling processing on the first sound data. In FIG. 7(d), portions estimated to contain breathing are indicated by downward arrows, and portions estimated to contain no breathing are indicated by upward arrows; ultimately, it suffices that the section identifying unit 130 can distinguish between them.
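For illustration, a minimal Python sketch of steps S132 and S133 is shown below; the block-averaging form of downsampling and the block length are assumptions made for the sketch.

```python
import numpy as np

def amplitude_envelope(x, block=256):
    """Steps S132-S133: rectify, then downsample by taking one value per
    block (here the block mean; the block length is illustrative)."""
    rect = np.abs(x)                          # step S132: absolute value
    n = (len(rect) // block) * block          # drop the ragged tail
    return rect[:n].reshape(-1, block).mean(axis=1)  # step S133: downsample
```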
Next, in step S134, the section identifying unit 130 determines whether to update (determine) the threshold value based on a predetermined update condition. Specifically, for example, when the section identifying unit 130 has never determined a threshold value for the first sound data, it determines that the update condition is satisfied. On the other hand, when the section identifying unit 130 has determined a threshold value for the first sound data at least once, it determines that the update condition is not satisfied.
When it is determined that the update condition is satisfied (Yes in step S134), the section identifying unit 130 proceeds to step S135 and performs processing for determining the threshold value. On the other hand, when it is determined that the update condition is not satisfied (No in step S134), the section identifying unit 130 performs step S137 using the threshold value that has already been determined.
The processing by which the section identifying unit 130 determines the threshold value will be described below. The section identifying unit 130 obtains the mode of the values indicating the amplitude in the first sound data. Specifically, in step S135, the section identifying unit 130 counts, for the first sound data within a predetermined time range, the number of occurrences of each value indicating the amplitude, and takes the value indicating the amplitude with the largest number of occurrences as the mode. The predetermined time range may be, for example, the range from when the acquisition unit 110 starts acquiring the first sound data to when the section identifying unit 130 performs this step, or it may be the most recent time T3 preceding the time at which this step is performed. T3 is not particularly limited, but is, for example, 2 seconds or more and 30 seconds or less. T3 may also, for example, be equal to T1. When the first sound data is the target sound data, the region of the predetermined time range may be made to coincide with the above-described first region.
As described above, the acquisition unit 110 may read out and acquire sound data held in a storage device. In that case, for example, the section identifying unit 130 may identify the mode and determine the threshold value using the entire sound data.
FIG. 8 is a histogram illustrating the number of occurrences of each amplitude. Such a histogram corresponds to a graph of the first sound data in which the horizontal axis represents the value indicating the amplitude and the vertical axis represents the number of occurrences. Note that the value indicating the amplitude is not limited to a value obtained by the above processing; it may be, for example, a peak-to-peak value or a normalized value.
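For illustration, a minimal Python sketch of counting occurrences and taking the mode (step S135) over such a histogram is shown below; the binning of amplitude values and the bin count are assumptions made for the sketch.

```python
import numpy as np

def amplitude_histogram(env, n_bins=100):
    """Count occurrences of each (binned) amplitude value, as in FIG. 8.
    Returns bin centers and counts; the bin count is illustrative."""
    counts, edges = np.histogram(env, bins=n_bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    return centers, counts

def mode_value(centers, counts):
    """Step S135: the amplitude value with the largest number of occurrences."""
    return centers[np.argmax(counts)]
```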
Then, in step S136, the section identifying unit 130 determines a threshold value larger than the mode. Specifically, among the one or more values indicating the amplitude at which the histogram takes a local minimum, the value indicating the amplitude closest to the mode is taken as the threshold value. Note that the mode in the histogram is the value indicating the smallest amplitude among the plural values indicating the amplitude at which the histogram takes a local maximum. In the graph of FIG. 8, the point 505 at which the number of occurrences is largest and a plurality of local minima are circled. In the example of this figure, the value indicating the amplitude at point 505 is the mode, and the value indicating the amplitude at the local minimum 506 on the lowest-amplitude side among the local minima is determined as the threshold value. The mode is considered to be a value indicating the amplitude corresponding mainly to apnea sections. Therefore, determining the threshold value in this manner yields a threshold value capable of distinguishing apnea sections from other sections. Moreover, since this threshold value is obtained using the sound data acquired by the acquisition unit 110, highly accurate section identification is realized regardless of individual differences in body sounds. The threshold value is not limited to the above example; for example, it may be the value indicating the amplitude at the second or third local minimum from the low-amplitude side in the histogram, or the value indicating the amplitude at the local maximum adjacent to point 505.
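For illustration, a minimal Python sketch of step S136 is shown below; it picks, among the local minima of the histogram, the one closest to the mode on the high-amplitude side, and its handling of ties and plateaus is a simplification made for the sketch.

```python
import numpy as np

def determine_threshold(centers, counts):
    """Step S136: among the local minima of the histogram, pick the one
    closest to the mode (above it). A sketch; plateaus in the histogram
    would need extra care in practice."""
    mode_idx = int(np.argmax(counts))
    minima = [i for i in range(1, len(counts) - 1)
              if counts[i] < counts[i - 1] and counts[i] < counts[i + 1]]
    above = [i for i in minima if i > mode_idx]
    if not above:
        raise ValueError("no local minimum above the mode")
    return centers[min(above, key=lambda i: i - mode_idx)]
```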
When the threshold value is determined in step S136, or when it is determined in step S134 that the update condition is not satisfied, the section identifying unit 130 performs at least one of the following first processing and second processing in step S137. The first processing identifies, as a first section, a section of the first sound data in which at least the value indicating the amplitude exceeds the threshold value. The second processing identifies, as a second section, a section of the first sound data in which at least the value indicating the amplitude is less than the threshold value. A section that coincides with the threshold value may be included in either the first section or the second section.
Specifically, the section identifying unit 130 applies the threshold value to the first sound data on which the processing of step S133 has been performed, treats a run in which the values indicating the amplitude continuously exceed the threshold value as one first section, and treats a run in which the values indicating the amplitude are continuously below the threshold value as one second section. The section identifying unit 130 may identify only one of the first section and the second section; in that case, the remaining sections become the other kind of section.
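For illustration, a minimal Python sketch of step S137 is shown below; it labels contiguous runs of envelope samples above the threshold as first sections and the remaining runs as second sections, returning index pairs over the downsampled data.

```python
def identify_sections(env, threshold):
    """Step S137: contiguous runs above the threshold become first
    (breathing) sections; runs below it become second sections.
    Returns (start, end) index pairs with end exclusive."""
    above = env > threshold
    sections = {"first": [], "second": []}
    start = 0
    for i in range(1, len(above) + 1):
        if i == len(above) or above[i] != above[start]:
            key = "first" if above[start] else "second"
            sections[key].append((start, i))
            start = i
    return sections
```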
In FIG. 9(a), the points of the graph of FIG. 7(d) that fall below the threshold value are circled. FIG. 9(b) is an enlarged view of the small-amplitude side of FIG. 9(a), with a straight line indicating the threshold value added.
The section identifying unit 130 performs section identification on newly acquired sound data, for example, every time T4. T4 is not particularly limited, but is, for example, 1 second or more and 10 seconds or less. T4 is preferably equal to or shorter than the above-described T2.
Note that the section identifying unit 130 may determine the threshold value each time it performs section identification. Alternatively, the section identifying unit 130 may perform section identification each time the calculation unit 150 calculates volume information. For example, the determination of the threshold value, the identification of the sections, and the calculation of the volume information may all be performed on sound data in the same time range.
The section identifying unit 130 may also identify the sections by other methods. For example, it may determine the sections in the same way using a predetermined threshold value. The section identifying unit 130 may also identify the sections based on the output of a band sensor attached to the chest or the like of the target person; such a band sensor can detect, for example, the expansion and movement of the chest during breathing.
Each functional component of the processing device 10 may be realized by hardware that implements that functional component (for example, a hard-wired electronic circuit) or by a combination of hardware and software (for example, a combination of an electronic circuit and a program that controls it). A case where each functional component of the processing device 10 is realized by a combination of hardware and software is described further below.
FIG. 10 is a diagram illustrating a computer 1000 for realizing the processing device 10. The computer 1000 is an arbitrary computer; for example, it is an SoC (System on Chip), a personal computer (PC), a server machine, a tablet terminal, or a smartphone. The computer 1000 may be a dedicated computer designed to realize the processing device 10 or a general-purpose computer.
The computer 1000 has a bus 1020, a processor 1040, a memory 1060, a storage device 1080, an input/output interface 1100, and a network interface 1120. The bus 1020 is a data transmission path through which the processor 1040, the memory 1060, the storage device 1080, the input/output interface 1100, and the network interface 1120 exchange data with one another. However, the method of connecting the processor 1040 and the other components to one another is not limited to a bus connection. The processor 1040 is any of various processors such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or an FPGA (Field-Programmable Gate Array). The memory 1060 is a main storage device realized using a RAM (Random Access Memory) or the like. The storage device 1080 is an auxiliary storage device realized using a hard disk, an SSD (Solid State Drive), a memory card, a ROM (Read Only Memory), or the like.
The input/output interface 1100 is an interface for connecting the computer 1000 to input/output devices. For example, an input device such as a keyboard or mouse and an output device such as a display device are connected to the input/output interface 1100. A touch panel or the like that serves as both a display device and an input device may also be connected to the input/output interface 1100.
The network interface 1120 is an interface for connecting the computer 1000 to a network. This network is, for example, a LAN (Local Area Network) or a WAN (Wide Area Network). The network interface 1120 may connect to the network by a wireless connection or a wired connection.
The storage device 1080 stores program modules that realize the functional components of the processing device 10. The processor 1040 realizes the function corresponding to each of these program modules by reading the program module into the memory 1060 and executing it.
The sensor 210 is connected, for example, to the input/output interface 1100 of the computer 1000, or to the network interface 1120 via a network.
As described above, according to the present embodiment, as in the first embodiment, the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data by using the first portion of the target sound data, determined based on the first section, and the second portion of the target sound data, determined based on the second section. Therefore, a breathing volume close to the volume perceived by human hearing can be calculated from body sound data.
(Third Embodiment)
FIG. 11 is a diagram illustrating the configurations of the processing device 10 and the system 20 according to the third embodiment. The processing device 10 and the system 20 according to the present embodiment are the same as the processing device 10 and the system 20 according to the second embodiment, respectively, except for the points described below.
The system 20 according to the present embodiment includes a plurality of sensors 210, and the acquisition unit 110 according to the present embodiment acquires a plurality of pieces of sound data indicating the sounds detected by the plurality of sensors 210. The plurality of sensors 210 detect body sounds at, for example, a plurality of sites on the body of the same person. The acquisition unit 110 can thus acquire sound data of body sounds detected simultaneously at a plurality of sites on the body of the same person, and can acquire a plurality of pieces of sound data whose recording times at least partially overlap. The acquisition unit 110 may further acquire sound data of the body sounds of a plurality of persons, but at least the processing by the section identifying unit 130 and the calculation unit 150 is performed for each person. The acquisition unit 110 may also acquire a plurality of pieces of sound data whose recording times do not overlap, but at least the processing by the section identifying unit 130 and the calculation unit 150 is performed for each single piece of sound data or for each group of pieces of sound data whose recording times at least partially overlap.
FIG. 12 is a diagram illustrating the attachment positions of the plurality of sensors 210. In the example of this figure, sensors 210 are attached at sites A to D. For example, prior to acquiring the sound data, the processing device 10 receives input from the user of information indicating the attachment site of each sensor 210. Specifically, a diagram representing the body is displayed on the display device together with candidate attachment sites for the sensors 210, and the user designates, from among the candidates, the position at which each sensor 210 is attached, using an input device such as a mouse, keyboard, or touch panel. The sound data acquired by each sensor 210 is associated with information indicating the site.
In the present embodiment, the section identifying unit 130 identifies at least one of the first section and the second section based on the first sound data, as described in the second embodiment. That is, the plurality of pieces of sound data acquired by the acquisition unit 110 include the target sound data and the first sound data. Here, the target sound data is not limited to a single piece. The following describes an example in which the target sound data includes at least second sound data different from the first sound data; that is, the volume information of the second sound data is calculated based on the sections identified in the first sound data. There may be a plurality of pieces of second sound data.
For example, the first sound data indicates a sound detected by a first sensor 210 provided at a first position on the surface of or inside the human body, and the second sound data indicates a sound detected by a second sensor 210 provided at a second position on the surface of or inside the human body. Here, it is preferable that the first position be located on the neck, or that the first position be closer to the neck than the second position; this allows the sections to be identified more accurately. In the example of FIG. 12, for instance, the sound data obtained at site A is preferably the first sound data. The section identifying unit 130 selects, for example, which of the plurality of pieces of sound data acquired by the acquisition unit 110 is to serve as the first sound data, based on the information indicating the site associated with each piece of sound data.
The acquisition step S110 and step S120 according to the present embodiment are performed by the acquisition unit 110 and the filter processing unit 120, respectively, in the same manner as in the second embodiment.
Then, in the section identifying step S130, the section identifying unit 130 according to the present embodiment identifies at least one of the first section and the second section in the first sound data, as in the second embodiment, and generates, based on the first sound data, at least one of first time information indicating the time range of the first section and second time information indicating the time range of the second section.
In the calculation step S150, the calculation unit 150 according to the present embodiment uses at least one of the first time information and the second time information, as in the second embodiment, to identify the first portion and the second portion of each piece of target sound data. In this way, the first sections, in which breathing is estimated to be occurring, and the remaining second sections can also be identified in the second sound data included in the target sound data.
The calculation unit 150 further calculates volume information for each piece of target sound data in the same manner as in the second embodiment, whereby volume information for each site can be obtained. The calculation unit 150 may treat all the sound data acquired by the acquisition unit 110 as target sound data, or may treat only the sound data corresponding to sites designated by the user as target sound data.
FIG. 13 is a diagram showing a display example of the volume information of a plurality of sites. In the example of this figure, the volume information of each site is displayed in a manner that makes its correspondence with the site clear. Specifically, a numerical value indicating the volume, based on the volume information, is displayed on a map indicating the sites. In addition, the numerical values indicating the volume are plotted in time series on graphs. In the graphs of this figure, each horizontal axis shows the elapsed time from a reference time and is synchronized across the sites. The scales of the graph axes may be enlarged, reduced, or translated by the user as necessary.
Note that the target sound data may further include the first sound data. Moreover, although the present embodiment has described an example in which the target sound data includes at least second sound data different from the first sound data, the target sound data may include only the first sound data. The volume information of the first sound data can be calculated in the same manner as above.
As described above, according to the present embodiment, as in the first embodiment, the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data by using the first portion of the target sound data, determined based on the first section, and the second portion of the target sound data, determined based on the second section. Therefore, a breathing volume close to the volume perceived by human hearing can be calculated from body sound data.
(Fourth Embodiment)
The processing device 10 and the system 20 according to the fourth embodiment are the same as the processing device 10 and the system 20 according to the third embodiment, respectively, except for the processing contents of the section identifying unit 130 and the calculation unit 150.
Like the acquisition unit 110 according to the third embodiment, the acquisition unit 110 according to the present embodiment acquires a plurality of pieces of sound data indicating the sounds detected by a plurality of sensors 210. The section identifying unit 130 then identifies the first section and the second section in each of two or more of the acquired pieces of sound data. That is, in the present embodiment, the section identifying unit 130 uses two or more of the plurality of pieces of sound data acquired by the acquisition unit 110 as first sound data. Identifying the sections using two or more pieces of first sound data improves the accuracy of section identification.
When the threshold value is determined based on the first sound data as in the section identifying step S130 according to the second embodiment, the section identifying unit 130 determines a threshold value for each piece of first sound data in which it identifies sections.
The section identifying unit 130 generates third time information indicating the time range treated as the first section in all the pieces of first sound data in which sections were identified, or the time range treated as the second section in all the pieces of first sound data in which sections were identified.
Here, the two or more pieces of first sound data preferably include sound data detected at the neck, or the sound data detected at the position closest to the neck among the plurality of pieces of sound data; this allows the sections to be identified more accurately.
The calculation unit 150 then uses the third time information to identify the first portion and the second portion of each piece of target sound data. Specifically, when the third time information indicates the time range treated as the first section in all the pieces of first sound data, the calculation unit 150 takes the portion of each piece of target sound data within the time range indicated by the third time information as the first portion and the remaining portion as the second portion. Conversely, when the third time information indicates the time range treated as the second section in all the pieces of first sound data, the calculation unit 150 takes the portion of each piece of target sound data within the time range indicated by the third time information as the second portion and the remaining portion as the first portion.
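For illustration, a minimal Python sketch of forming the third time information is shown below; it assumes each piece of first sound data has been reduced to a boolean mask on a common time base (True meaning first section), which is an assumption made for the sketch.

```python
import numpy as np

def third_time_information(first_masks):
    """Time range treated as the first section in ALL pieces of first
    sound data: an elementwise AND over per-data boolean masks.
    Assumes the masks share one common time base."""
    combined = first_masks[0].copy()
    for mask in first_masks[1:]:
        combined &= mask
    return combined
```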
The target sound data may or may not include the first sound data, and may or may not include second sound data different from the first sound data.
The calculation unit 150 further calculates volume information for each piece of target sound data in the same manner as in the second embodiment, whereby volume information for each site can be obtained.
As described above, according to the present embodiment, as in the first embodiment, the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data by using the first portion of the target sound data, determined based on the first section, and the second portion of the target sound data, determined based on the second section. Therefore, a breathing volume close to the volume perceived by human hearing can be calculated from body sound data.
In addition, according to the present embodiment, the section identifying unit 130 identifies the first section and the second section in each of two or more pieces of sound data. Therefore, the accuracy of section identification can be improved.
(Fifth Embodiment)
FIG. 14 is a diagram illustrating the configurations of the processing device 10 and the system 20 according to the fifth embodiment. The processing device 10 and the system 20 according to the present embodiment are the same as those of at least one of the second to fourth embodiments, except for the points described below.
In the present embodiment, the processing device 10 further includes an estimation unit 170. The estimation unit 170 estimates the state of the person whose body sounds were detected, based on the volume information calculated by the calculation unit 150. This is described in detail below.
For example, the calculation unit 150 calculates the volume information of a plurality of pieces of target sound data in the same manner as in at least one of the second to fourth embodiments.
FIG. 15 and FIG. 16 are diagrams each showing a display example of the volume information of a plurality of sites.
When the calculation unit 150 calculates the volume information, the estimation unit 170 acquires the calculated volume information from the calculation unit 150. Each piece of volume information is associated with information indicating a site. The estimation unit 170 calculates, for example, the rate of decrease per unit time of the volume indicated by the volume information of each site. When the rate of decrease exceeds a predetermined reference value, the estimation unit 170 estimates that breathing has weakened. In that case, the estimation unit 170 displays or issues a notification indicating that breathing has weakened, for example, as shown in FIG. 15. Note that the estimation unit 170 may estimate that breathing has weakened when the rate of volume decrease becomes high at a predetermined number of sites or more, or when the rate of volume decrease remains high over a predetermined length of time.
The estimation unit 170 also calculates, for example, the difference between two pieces of sound data indicating body sounds detected at mutually symmetrical positions on the body. When the magnitude of the difference exceeds a predetermined reference value, the estimation unit 170 estimates that pneumothorax is suspected, and estimates, based on whether the difference is positive or negative, which of the left and right lungs is suspected of pneumothorax. In this way, the estimation unit 170 may estimate the position of the sound source of an abnormal breath sound based on the plurality of calculated pieces of volume information. The estimation unit 170 then displays or issues a notification indicating that pneumothorax is suspected and indicating the estimated position, for example, as shown in FIG. 16. Note that the estimation unit 170 may make this estimation when the magnitude of the difference exceeds the reference value over a predetermined length of time.
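For illustration, a minimal Python sketch of such a left-right comparison is shown below; the reference value, the sign convention, and the mapping from the sign of the difference to the suspected side are all assumptions made for the sketch.

```python
def pneumothorax_suspicion(vol_left_db, vol_right_db, ref_db=6.0):
    """Compare volume information at symmetric sites. ref_db is a
    hypothetical reference value; the side mapping assumes the quieter
    lung is the suspected one. Returns the suspected side, or None."""
    diff = vol_left_db - vol_right_db
    if abs(diff) <= ref_db:
        return None
    return "right" if diff > 0 else "left"
```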
The processing device 10 according to the present embodiment can also be realized using a computer 1000 as shown in FIG. 10.
As described above, according to the present embodiment, as in the first embodiment, the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data by using the first portion of the target sound data, determined based on the first section, and the second portion of the target sound data, determined based on the second section. Therefore, a breathing volume close to the volume perceived by human hearing can be calculated from body sound data.
In addition, according to the present embodiment, the processing device 10 includes the estimation unit 170, which estimates the state of the person whose body sounds were detected based on the volume information calculated by the calculation unit 150. Therefore, the condition of a patient or the like can be monitored.
The present embodiments will now be described in detail with reference to an example. Note that the present embodiments are in no way limited to the description of this example.
Body sounds were measured at four sites (the neck, the upper right chest, the upper left chest, and the lower right chest) on 22 subjects to obtain sound data. The obtained sound data was then played back, and an audibility evaluation was performed as to whether breath sounds could be heard. In the audibility evaluation, a score of "0" was given when no breath sound could be heard at all, "1" when it was barely audible, and "2" when it was clearly audible.
Meanwhile, the sound data was processed by the respective methods of the example and a comparative example, and values indicating the volume were calculated. The calculated values were then compared with the results of the audibility evaluation.
In the comparative example, the sound data was subjected to high-pass filter processing with a cutoff frequency of 400 Hz. The RMS was then calculated, and the RMS expressed in decibels was taken as the value indicating the volume, with 0 dB = 1 digit.
In the example, the value indicating the volume was calculated as described in the second embodiment. Specifically, a threshold value was determined based on each piece of sound data, and the sections were identified. The RMS of each section was calculated for the sound data subjected to the first filter processing with fL1 set to 100 Hz and fH1 set to 1000 Hz. The value obtained by dividing the RMS of the first sections by the RMS of the second sections, expressed in decibels, was taken as the value indicating the volume; that is, the RMS of the second sections was set to 0 dB. The determination of the threshold value, the identification of the sections, and the calculation of the value indicating the volume were performed independently for each piece of sound data.
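For illustration, a minimal Python sketch of this volume calculation is shown below; the amplitude-ratio decibel convention 20*log10 is an assumption made for the sketch.

```python
import numpy as np

def breathing_volume_db(x, first_idx, second_idx):
    """Value indicating the volume in the example: RMS of the first
    (breathing) sections over RMS of the second sections, in decibels,
    so that the second sections sit at 0 dB. The index arrays select
    the per-sample section membership of the filtered sound data."""
    def rms(v):
        return np.sqrt(np.mean(np.square(v)))
    return 20.0 * np.log10(rms(x[first_idx]) / rms(x[second_idx]))
```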
FIG. 17 and FIG. 18 are box-and-whisker plots showing the relationship between the values indicating the volume calculated in the comparative example and in the example, respectively, and the results of the audibility evaluation. FIG. 19 is a histogram showing the relationship between the results of the audibility evaluation and the values indicating the volume calculated in the comparative example, and FIG. 20 is a histogram showing the same relationship for the values calculated in the example. As shown in FIG. 17 and FIG. 19, in the comparative example the difference between the sound data rated 1 and the sound data rated 0 did not appear adequately as a difference in volume. In contrast, as shown in FIG. 18 and FIG. 20, in the example the magnitude of the value indicating the volume correlated well with ratings 0, 1, and 2, and the sound data rated 0 and the sound data rated 1 could be clearly distinguished by the value indicating the volume. Thus, by the method of the example, a value highly correlated with the results of the audibility evaluation could be calculated as the volume. It was therefore confirmed that the method of the example can calculate, from sound data, a breathing volume close to the volume perceived by human hearing.
Although embodiments of the present invention have been described above with reference to the drawings, these are illustrations of the present invention, and various configurations other than the above may also be adopted. For example, although the flowcharts used in the above description list a plurality of steps (processes) in order, the order in which the steps are executed in each embodiment is not limited to the described order. In each embodiment, the order of the illustrated steps may be changed to the extent that doing so does not interfere with the content. Moreover, the above embodiments may be combined to the extent that their contents do not conflict.
Examples of reference embodiments are appended below.

1-1. A processing device comprising: an acquisition unit that acquires one or more pieces of sound data including breath sounds; a section identifying unit that identifies at least one of a first section in which breathing is estimated to be occurring and a second section between a plurality of the first sections; and a calculation unit that calculates volume information indicating the breathing volume of target sound data by using a first portion of the target sound data determined based on the first section and a second portion of the target sound data determined based on the second section.

1-2. The processing device according to 1-1., wherein the calculation unit calculates a first signal strength, which is the strength of the first portion of the target sound data, and a second signal strength, which is the strength of the second portion of the target sound data, and calculates the volume information of the target sound data using the first signal strength and the second signal strength.

1-3. The processing device according to 1-2., wherein the calculation unit calculates, as the volume information of the target sound data, information specifying the ratio of the first signal strength to the second signal strength.

1-4. The processing device according to any one of 1-1. to 1-3., further comprising a filter processing unit that performs filter processing on at least the target sound data, wherein the calculation unit calculates the volume information using the first portion and the second portion of the target sound data after the filter processing, the filter processing unit performs bandpass filter processing whose low-frequency cutoff is fL1 [Hz] and whose high-frequency cutoff is fH1 [Hz], 50 ≤ fL1 ≤ 150 holds, and 500 ≤ fH1 ≤ 1500 holds.

1-5. The processing device according to any one of 1-1. to 1-4., wherein the acquisition unit acquires a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors, the section identifying unit generates, based on first sound data, at least one of first time information indicating the time range of the first section and second time information indicating the time range of the second section, the calculation unit identifies the first portion and the second portion of the target sound data using at least one of the first time information and the second time information, and the target sound data includes at least second sound data different from the first sound data.

1-6. The processing device according to 1-5., wherein the first sound data indicates a sound detected by a first sensor provided at a first position on the surface of or inside a human body, the second sound data indicates a sound detected by a second sensor provided at a second position on the surface of or inside the human body, and the first position is located on the neck, or the first position is closer to the neck than the second position.

1-7. The processing device according to any one of 1-1. to 1-4., wherein the acquisition unit acquires a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors, the section identifying unit identifies the first section and the second section in each of two or more pieces of the sound data and generates third time information indicating the time range treated as the first section in all of the two or more pieces of sound data or the time range treated as the second section in all of the two or more pieces of sound data, and the calculation unit identifies the first portion and the second portion of the target sound data using the third time information.

1-8. The processing device according to any one of 1-5. to 1-7., wherein the calculation unit calculates the volume information of a plurality of pieces of the target sound data, the processing device further comprising an estimation unit that estimates the position of the sound source of an abnormal breath sound based on the plurality of calculated pieces of volume information.

1-9. A system comprising: the processing device according to any one of 1-1. to 1-8.; and a sensor, wherein the acquisition unit acquires the sound data indicating sounds detected by the sensor.

2-1. A processing method comprising: an acquisition step of acquiring one or more pieces of sound data including breath sounds; a section identifying step of identifying at least one of a first section in which breathing is estimated to be occurring and a second section between a plurality of the first sections; and a calculation step of calculating volume information indicating the breathing volume of target sound data by using a first portion of the target sound data determined based on the first section and a second portion of the target sound data determined based on the second section.

2-2. The processing method according to 2-1., wherein the calculation step calculates a first signal strength, which is the strength of the first portion of the target sound data, and a second signal strength, which is the strength of the second portion of the target sound data, and calculates the volume information of the target sound data using the first signal strength and the second signal strength.

2-3. The processing method according to 2-2., wherein the calculation step calculates, as the volume information of the target sound data, information specifying the ratio of the first signal strength to the second signal strength.

2-4. The processing method according to any one of 2-1. to 2-3., further comprising a filter processing step of performing filter processing on at least the target sound data, wherein the calculation step calculates the volume information using the first portion and the second portion of the target sound data after the filter processing, the filter processing step performs bandpass filter processing whose low-frequency cutoff is fL1 [Hz] and whose high-frequency cutoff is fH1 [Hz], 50 ≤ fL1 ≤ 150 holds, and 500 ≤ fH1 ≤ 1500 holds.

2-5. The processing method according to any one of 2-1. to 2-4., wherein the acquisition step acquires a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors, the section identifying step generates, based on first sound data, at least one of first time information indicating the time range of the first section and second time information indicating the time range of the second section, the calculation step identifies the first portion and the second portion of the target sound data using at least one of the first time information and the second time information, and the target sound data includes at least second sound data different from the first sound data.

2-6. The processing method according to 2-5., wherein the first sound data indicates a sound detected by a first sensor provided at a first position on the surface of or inside a human body, the second sound data indicates a sound detected by a second sensor provided at a second position on the surface of or inside the human body, and the first position is located on the neck, or the first position is closer to the neck than the second position.

2-7. The processing method according to any one of 2-1. to 2-4., wherein the acquisition step acquires a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors, the section identifying step identifies the first section and the second section in each of two or more pieces of the sound data and generates third time information indicating the time range treated as the first section in all of the two or more pieces of sound data or the time range treated as the second section in all of the two or more pieces of sound data, and the calculation step identifies the first portion and the second portion of the target sound data using the third time information.

2-8. The processing method according to any one of 2-5. to 2-7., wherein the calculation step calculates the volume information of a plurality of pieces of the target sound data, the processing method further comprising an estimation step of estimating the position of the sound source of an abnormal breath sound based on the plurality of calculated pieces of volume information.

3-1. A program causing a computer to execute each step of the processing method according to any one of 2-1. to 2-8.

4-1. A processing device comprising: an acquisition unit that acquires sound data including breath sounds; and a section identifying unit that identifies, using a threshold value for values indicating the amplitude of the sound data, at least one of a first section of the sound data in which breathing is estimated to be occurring and a second section between a plurality of the first sections, wherein the value indicating the amplitude is a value indicating the magnitude of vibration of the sound data at each time, and the section identifying unit determines the threshold value based on the sound data.

4-2. The processing device according to 4-1., wherein the section identifying unit obtains the mode of the values indicating the amplitude in the sound data, determines the threshold value to be larger than the mode, and performs at least one of first processing of identifying, as the first section, a section of the sound data in which at least the value indicating the amplitude exceeds the threshold value and second processing of identifying, as the second section, a section in which at least the value indicating the amplitude is less than the threshold value.

4-3. The processing device according to 4-2., wherein, when the sound data is represented in a graph whose horizontal axis is the value indicating the amplitude and whose vertical axis is the number of occurrences, the threshold value is, among the one or more values indicating the amplitude at which the graph takes a local minimum, the value indicating the amplitude closest to the mode.

4-4. The processing device according to 4-3., wherein, when the sound data is represented in the graph, the mode is the value indicating the smallest amplitude among the plural values indicating the amplitude at which the graph takes a local maximum.

4-5. The processing device according to any one of 4-2. to 4-4., wherein the section identifying unit obtains the mode based on the sound data after filter processing, the filter processing is bandpass filter processing whose low-frequency cutoff is fL2 [Hz] and whose high-frequency cutoff is fH2 [Hz], 150 ≤ fL2 ≤ 250 holds, and 550 ≤ fH2 ≤ 650 holds.

4-6. The processing device according to any one of 4-2. to 4-5., wherein the section identifying unit obtains the mode based on data obtained by performing at least downsampling processing on the sound data.

4-7. The processing device according to any one of 4-1. to 4-6., wherein the acquisition unit acquires a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors, and the section identifying unit identifies at least one of the first section and the second section in first sound data included in the plurality of pieces of sound data, generates at least one of first time information indicating the time range of the first section and second time information indicating the time range of the second section, and identifies, based on at least one of the first time information and the second time information, the first section and the second section of second sound data that is included in the plurality of pieces of sound data and differs from the first sound data.

4-8. The processing device according to 4-7., wherein the first sound data indicates a sound detected by a first sensor provided at a first position on the surface of or inside a human body, the second sound data indicates a sound detected by a second sensor provided at a second position on the surface of or inside the human body, and the first position is located on the neck, or the first position is closer to the neck than the second position.

4-9. The processing device according to any one of 4-1. to 4-6., wherein the acquisition unit acquires a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors, and the section identifying unit identifies the first section and the second section in each of two or more pieces of the sound data and generates third time information indicating the time range identified as the first section in all of the two or more pieces of sound data or the time range identified as the second section in all of the two or more pieces of sound data.

4-10. A system comprising: the processing device according to any one of 4-1. to 4-9.; and a sensor, wherein the acquisition unit acquires the sound data indicating sounds detected by the sensor.

5-1. A processing method comprising: an acquisition step of acquiring sound data including breath sounds; and a section identifying step of identifying, using a threshold value for values indicating the amplitude of the sound data, at least one of a first section of the sound data in which breathing is estimated to be occurring and a second section between a plurality of the first sections, wherein the value indicating the amplitude is a value indicating the magnitude of vibration of the sound data at each time, and the section identifying step determines the threshold value based on the sound data.

5-2. The processing method according to 5-1., wherein the section identifying step obtains the mode of the values indicating the amplitude in the sound data, determines the threshold value to be larger than the mode, and performs at least one of first processing of identifying, as the first section, a section of the sound data in which at least the value indicating the amplitude exceeds the threshold value and second processing of identifying, as the second section, a section in which at least the value indicating the amplitude is less than the threshold value.

5-3. The processing method according to 5-2., wherein, when the sound data is represented in a graph whose horizontal axis is the value indicating the amplitude and whose vertical axis is the number of occurrences, the threshold value is, among the one or more values indicating the amplitude at which the graph takes a local minimum, the value indicating the amplitude closest to the mode.

5-4. The processing method according to 5-3., wherein, when the sound data is represented in the graph, the mode is the value indicating the smallest amplitude among the plural values indicating the amplitude at which the graph takes a local maximum.

5-5. The processing method according to any one of 5-2. to 5-4., wherein the section identifying step obtains the mode based on the sound data after filter processing, the filter processing is bandpass filter processing whose low-frequency cutoff is fL2 [Hz] and whose high-frequency cutoff is fH2 [Hz], 150 ≤ fL2 ≤ 250 holds, and 550 ≤ fH2 ≤ 650 holds.

5-6. The processing method according to any one of 5-2. to 5-5., wherein the section identifying step obtains the mode based on data obtained by performing at least downsampling processing on the sound data.

5-7. The processing method according to any one of 5-1. to 5-6., wherein the acquisition step acquires a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors, and the section identifying step identifies at least one of the first section and the second section in first sound data included in the plurality of pieces of sound data, generates at least one of first time information indicating the time range of the first section and second time information indicating the time range of the second section, and identifies, based on at least one of the first time information and the second time information, the first section and the second section of second sound data that is included in the plurality of pieces of sound data and differs from the first sound data.

5-8. The processing method according to 5-7., wherein the first sound data indicates a sound detected by a first sensor provided at a first position on the surface of or inside a human body, the second sound data indicates a sound detected by a second sensor provided at a second position on the surface of or inside the human body, and the first position is located on the neck, or the first position is closer to the neck than the second position.

5-9. The processing method according to any one of 5-1. to 5-6., wherein the acquisition step acquires a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors, and the section identifying step identifies the first section and the second section in each of two or more pieces of the sound data and generates third time information indicating the time range identified as the first section in all of the two or more pieces of sound data or the time range identified as the second section in all of the two or more pieces of sound data.

6-1. A program causing a computer to execute each step of the processing method according to any one of 5-1. to 5-9.
Hereinafter, an example of the reference mode will be additionally described.
1-1. An acquisition unit that acquires one or more sound data including breath sounds,
A section specifying unit that specifies at least one of a first section presumed to be breathing and a second section between the plurality of first sections;
Using the first portion of the target sound data that is determined based on the first section and the second portion of the target sound data that is determined based on the second section, the target sound data And a calculation unit that calculates volume information indicating a respiratory volume.
1-2. 1-1. In the processing device described in
The calculation unit
Calculating a first signal strength that is the strength of the first portion of the target sound data and a second signal strength that is the strength of the second portion of the target sound data,
A processing device for calculating the volume information of the target sound data using the first signal strength and the second signal strength.
1-3. 1-2. In the processing device described in
The said calculation part is a processing apparatus which calculates the information which specifies the ratio of the said 1st signal strength to the said 2nd signal strength as the said volume information of the said sound data of object.
1-4. 1-1. From 1-3. In the processing device according to any one of
Further comprising a filter processing unit that performs a filter process on at least the target sound data,
The calculator calculates the volume information using the first portion and the second portion of the target sound data after the filtering,
The filter processing unit performs bandpass filter processing in which the cutoff frequency on the low frequency side is f L1 [Hz] and the cutoff frequency on the high frequency side is f H1 [Hz].
50 ≦ f L1 ≦ 150 holds,
A processing device satisfying 500 ≦ f H1 ≦ 1500.
1-5. 1-1. To 1-4. In the processing device according to any one of
The acquisition unit acquires a plurality of the sound data indicating a sound detected by a plurality of sensors,
The section identifying unit generates at least one of first time information indicating a time range of the first section and second time information indicating a time range of the second section based on the first sound data,
The calculation unit uses at least one of the first time information and the second time information to specify the first portion and the second portion of the target sound data,
The processing device in which the target sound data includes at least second sound data different from the first sound data.
1-6. 1-5. In the processing device described in
The first sound data indicates a sound detected by the first sensor provided at the first position on the surface of or inside the human body,
The second sound data indicates a sound detected by the second sensor provided at a second position on the surface of or inside the human body,
The processing device wherein the first position is located on the neck, or the first position is closer to the neck than the second position.
1-7. 1-1. To 1-4. In the processing device according to any one of
The acquisition unit acquires a plurality of the sound data indicating a sound detected by a plurality of sensors,
The section specifying unit,
Specifying the first section and the second section in each of two or more of the sound data,
Generating third time information indicating a time range defined as the first section in all of the two or more sound data, or a time range defined as the second section in all of the two or more sound data,
The said calculation part is a processing apparatus which specifies the said 1st part and said 2nd part of the sound data of the said object using the said 3rd time information.
1-8. 1-5. To 1-7. In the processing device according to any one of
The calculation unit calculates the volume information of a plurality of target sound data,
The processing device further comprising an estimation unit that estimates the position of the sound source of the abnormal respiratory sound based on the calculated plurality of volume information.
1-9. 1-1. To the processing device according to any one of 1 to 8,
Equipped with a sensor,
The said acquisition part is a system which acquires the said sound data which show the sound detected by the said sensor.
2-1. An acquisition step of acquiring one or more sound data including breath sounds,
A section identifying step of identifying at least one of a first section presumed to be breathing and a second section among the plurality of first sections;
Using the first portion of the target sound data that is determined based on the first section and the second portion of the target sound data that is determined based on the second section, the target sound data And a calculation step of calculating volume information indicating the breath volume.
2-2. 2-1. In the processing method described in
In the calculation step,
Calculating a first signal strength that is the strength of the first portion of the target sound data and a second signal strength that is the strength of the second portion of the target sound data,
A processing method for calculating the volume information of the target sound data using the first signal strength and the second signal strength.
2-3. 2-2. In the processing method described in
In the calculating step, the processing method of calculating information specifying the ratio of the first signal strength to the second signal strength as the volume information of the target sound data.
2-4. 2-1. From 2-3. In the processing method described in any one of
Further comprising a filtering step for performing a filtering process on at least the target sound data,
In the calculating step, the volume information is calculated using the first portion and the second portion of the target sound data after the filtering process,
In the filtering step, bandpass filter processing is performed in which the cutoff frequency on the low frequency side is f L1 [Hz] and the cutoff frequency on the high frequency side is f H1 [Hz].
50 ≦ f L1 ≦ 150 holds,
A processing method in which 500 ≦ f H1 ≦ 1500.
2-5. 2-1. From 2-4. In the processing method described in any one of
In the acquisition step, a plurality of the sound data indicating the sound detected by the plurality of sensors is acquired,
In the section specifying step, at least one of first time information indicating a time range of the first section and second time information indicating a time range of the second section is generated based on the first sound data,
In the calculating step, at least one of the first time information and the second time information is used to identify the first portion and the second portion of the target sound data,
The processing method, wherein the target sound data includes at least second sound data different from the first sound data.
2-6. 2-5. In the processing method described in
The first sound data indicates a sound detected by the first sensor provided at the first position on the surface of or inside the human body,
The second sound data indicates a sound detected by the second sensor provided at a second position on the surface of or inside the human body,
The processing method wherein the first position is located on the neck, or the first position is closer to the neck than the second position.
2-7. 2-1. From 2-4. In the processing method described in any one of
In the acquisition step, a plurality of the sound data indicating the sound detected by the plurality of sensors is acquired,
In the section specifying step,
Specifying the first section and the second section in each of two or more of the sound data,
Generating third time information indicating a time range defined as the first section in all of the two or more sound data, or a time range defined as the second section in all of the two or more sound data,
In the calculating step, a processing method of identifying the first portion and the second portion of the target sound data by using the third time information.
2-8. 2-5. To 2-7. In the processing method described in any one of
In the calculation step, the volume information of a plurality of target sound data is calculated,
The processing method further comprising an estimation step of estimating the position of the sound source of the abnormal respiratory sound based on the calculated plurality of volume information.
3-1. 2-1. To 2-8. A program that causes a computer to execute each step of the processing method described in any one of 1.
4-1. An acquisition unit that acquires sound data including breath sounds,
At least one of the first section presumed to be breathing in the sound data and the second section between the plurality of first sections is set using a threshold value for the amplitude of the sound data. With a section specifying unit to specify,
The value indicating the amplitude is a value indicating the magnitude of vibration at each time of the sound data,
A processing device in which the section specifying unit determines the threshold based on the sound data.
4-2. In the processing device described in 4-1.,
The section specifying unit,
Obtaining the mode of the value indicating the amplitude in the sound data,
Defining the threshold to be larger than the mode,
A processing device that performs at least one of a first process of specifying, as the first section, at least a section in which the value indicating the amplitude exceeds the threshold, and a second process of specifying, as the second section, at least a section in which the value indicating the amplitude is less than the threshold.
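A minimal sketch of the first and second processes above: given a per-sample amplitude value (e.g., a smoothed absolute amplitude, an assumption of this example) and a threshold, consecutive runs above the threshold become first sections and runs below it become second sections.

import numpy as np

def split_sections(amplitude, threshold):
    # Returns (first_sections, second_sections) as lists of (start, end)
    # sample-index ranges: above-threshold runs and below-threshold runs.
    above = amplitude > threshold
    flips = np.flatnonzero(np.diff(above.astype(int)))   # state-change positions
    bounds = np.concatenate(([0], flips + 1, [len(amplitude)]))
    first, second = [], []
    for s, e in zip(bounds[:-1], bounds[1:]):
        (first if above[s] else second).append((int(s), int(e)))
    return first, second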
4-3. In the processing device described in 4-2.,
A processing device in which the threshold is, when the sound data is represented in a graph whose horizontal axis indicates the value indicating the amplitude and whose vertical axis indicates the number of appearances, the value indicating the amplitude that is closest to the mode among the one or more values indicating the amplitude at which the graph takes a local minimum.
4-4. In the processing device described in 4-3.,
A processing device in which, when the sound data is represented in the graph, the mode is the value indicating the smallest amplitude among the plurality of values indicating the amplitude at which the graph takes a local maximum.
4-5. In the processing device according to any one of 4-2. to 4-4.,
The section specifying unit obtains the mode based on the sound data after filter processing,
The filter processing is bandpass filter processing in which the cutoff frequency on the low-frequency side is f_L2 [Hz] and the cutoff frequency on the high-frequency side is f_H2 [Hz],
150 ≤ f_L2 ≤ 250 holds, and
A processing device in which 550 ≤ f_H2 ≤ 650 holds.
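Putting 4-3. to 4-5. together, the sketch below builds the amplitude histogram of the bandpass-filtered data, takes the mode as the smallest-amplitude local maximum, and returns as the threshold the local minimum nearest above the mode. The bin count, the cutoff pair (200 Hz / 600 Hz, inside the ranges of 4-5.), and the absence of histogram smoothing are assumptions of this example; on real data the counts may need smoothing before the extrema search.

import numpy as np
from scipy.signal import argrelextrema, butter, sosfiltfilt

def determine_threshold(sound, fs, bins=256):
    # Bandpass per 4-5. (150 <= f_L2 <= 250, 550 <= f_H2 <= 650; 200/600 chosen here).
    sos = butter(4, [200.0, 600.0], btype="bandpass", fs=fs, output="sos")
    amplitude = np.abs(sosfiltfilt(sos, sound))
    counts, edges = np.histogram(amplitude, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    maxima = argrelextrema(counts, np.greater)[0]   # local maxima of the histogram
    minima = argrelextrema(counts, np.less)[0]      # local minima of the histogram
    mode_idx = maxima[0]                 # smallest-amplitude local maximum (4-4.)
    above = minima[minima > mode_idx]    # threshold must exceed the mode (4-2.)
    return float(centers[above[0]])      # local minimum closest to the mode (4-3.)

The quiet between-breath background typically produces the dominant low-amplitude peak, so a threshold placed at the valley just above that peak separates breathing sections from the background for that particular recording, which is why the threshold can be determined from the sound data itself.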
4-6. In the processing device according to any one of 4-2. to 4-5.,
A processing device in which the section specifying unit obtains the mode based on data obtained by performing at least downsampling processing on the sound data.
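The mode search of 4-6. does not need the full audio sampling rate, so the sound data can first be downsampled; a sketch follows, with the factor q = 10 an illustrative assumption (scipy's decimate applies an anti-aliasing filter before keeping every q-th sample).

from scipy.signal import decimate

def downsample_for_mode(sound, q=10):
    # Reduce the sample rate by q before the histogram/mode computation.
    return decimate(sound, q)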
4-7. In the processing device according to any one of 4-1. to 4-6.,
The acquisition unit acquires a plurality of the sound data indicating a sound detected by a plurality of sensors,
The section specifying unit,
Specifying at least one of the first section and the second section in the first sound data included in the plurality of sound data,
Generating at least one of first time information indicating a time range of the first section and second time information indicating a time range of the second section,
A processing device that specifies the first section and the second section of second sound data, which is included in the plurality of sound data and differs from the first sound data, based on at least one of the first time information and the second time information.
4-8. In the processing device described in 4-7.,
The first sound data indicates a sound detected by the first sensor provided at the first position on the surface of or inside the human body,
The second sound data indicates a sound detected by the second sensor provided at a second position on the surface of or inside the human body,
The processing device wherein the first position is located on the neck, or the first position is closer to the neck than the second position.
4-9. In the processing device according to any one of 4-1. to 4-6.,
The acquisition unit acquires a plurality of the sound data indicating a sound detected by a plurality of sensors,
The section specifying unit,
Specifying the first section and the second section in each of two or more of the sound data,
A processing device that generates third time information indicating a time range specified as the first section in all of the two or more sound data, or a time range specified as the second section in all of the two or more sound data.
4-10. A system comprising the processing device according to any one of 4-1. to 4-9., and
a sensor,
in which the acquisition unit acquires the sound data indicating the sound detected by the sensor.
5-1. An acquisition step of acquiring sound data including breath sounds,
And a section specifying step of specifying, using a threshold for a value indicating the amplitude of the sound data, at least one of a first section in which breathing is presumed to be performed in the sound data and a second section between a plurality of the first sections,
The value indicating the amplitude is a value indicating the magnitude of vibration at each time of the sound data,
A processing method in which, in the section specifying step, the threshold is determined based on the sound data.
5-2. In the processing method described in 5-1.,
In the section specifying step,
Obtaining the mode of the value indicating the amplitude in the sound data,
Defining the threshold to be larger than the mode,
A processing method of performing at least one of a first process of specifying, as the first section, at least a section in which the value indicating the amplitude exceeds the threshold, and a second process of specifying, as the second section, at least a section in which the value indicating the amplitude is less than the threshold.
5-3. In the processing method described in 5-2.,
A processing method in which the threshold is, when the sound data is represented in a graph whose horizontal axis indicates the value indicating the amplitude and whose vertical axis indicates the number of appearances, the value indicating the amplitude that is closest to the mode among the one or more values indicating the amplitude at which the graph takes a local minimum.
5-4. In the processing method described in 5-3.,
A processing method in which, when the sound data is represented in the graph, the mode is the value indicating the smallest amplitude among the plurality of values indicating the amplitude at which the graph takes a local maximum.
5-5. In the processing method described in any one of 5-2. to 5-4.,
In the section specifying step, the mode is obtained based on the sound data after filter processing,
The filter processing is bandpass filter processing in which the cutoff frequency on the low-frequency side is f_L2 [Hz] and the cutoff frequency on the high-frequency side is f_H2 [Hz],
150 ≤ f_L2 ≤ 250 holds, and
A processing method in which 550 ≤ f_H2 ≤ 650 holds.
5-6. In the processing method described in any one of 5-2. to 5-5.,
A processing method in which, in the section specifying step, the mode is obtained based on data obtained by performing at least downsampling processing on the sound data.
5-7. In the processing method described in any one of 5-1. to 5-6.,
In the acquisition step, a plurality of the sound data indicating the sound detected by the plurality of sensors is acquired,
In the section specifying step,
Specifying at least one of the first section and the second section in the first sound data included in the plurality of sound data,
Generating at least one of first time information indicating a time range of the first section and second time information indicating a time range of the second section,
A processing method of specifying the first section and the second section of second sound data, which is included in the plurality of sound data and differs from the first sound data, based on at least one of the first time information and the second time information.
5-8. In the processing method described in 5-7.,
The first sound data indicates a sound detected by the first sensor provided at the first position on the surface of or inside the human body,
The second sound data indicates a sound detected by a second sensor provided at a second position on the surface of or inside the human body,
A processing method in which the first position is located on the neck, or the first position is closer to the neck than the second position.
5-9. In the processing method described in any one of 5-1. to 5-6.,
In the acquisition step, a plurality of the sound data indicating the sound detected by the plurality of sensors is acquired,
In the section specifying step,
Specifying the first section and the second section in each of two or more of the sound data,
A processing method of generating third time information indicating a time range specified as the first section in all of the two or more sound data, or a time range specified as the second section in all of the two or more sound data.
6-1. A program that causes a computer to execute each step of the processing method described in any one of 5-1. to 5-9.
This application claims priority based on Japanese Patent Application No. 2018-205536 filed on October 31, 2018, the entire disclosure of which is incorporated herein.

Claims (12)

1. A processing device comprising:
an acquisition unit that acquires sound data including breath sounds; and
a section specifying unit that specifies, using a threshold for a value indicating an amplitude of the sound data, at least one of a first section in which breathing is presumed to be performed in the sound data and a second section between a plurality of the first sections,
wherein the value indicating the amplitude is a value indicating a magnitude of vibration at each time of the sound data, and
the section specifying unit determines the threshold based on the sound data.
2. The processing device according to claim 1,
wherein the section specifying unit:
obtains a mode of the value indicating the amplitude in the sound data,
sets the threshold to be larger than the mode, and
performs at least one of a first process of specifying, as the first section, at least a section in which the value indicating the amplitude exceeds the threshold, and a second process of specifying, as the second section, at least a section in which the value indicating the amplitude is less than the threshold.
3. The processing device according to claim 2,
wherein the threshold is, when the sound data is represented in a graph whose horizontal axis indicates the value indicating the amplitude and whose vertical axis indicates the number of appearances, the value indicating the amplitude that is closest to the mode among one or more values indicating the amplitude at which the graph takes a local minimum.
4. The processing device according to claim 3,
wherein, when the sound data is represented in the graph, the mode is the value indicating the smallest amplitude among a plurality of values indicating the amplitude at which the graph takes a local maximum.
5. The processing device according to any one of claims 2 to 4,
wherein the section specifying unit obtains the mode based on the sound data after filter processing,
the filter processing is bandpass filter processing in which the cutoff frequency on the low-frequency side is f_L2 [Hz] and the cutoff frequency on the high-frequency side is f_H2 [Hz],
150 ≤ f_L2 ≤ 250 holds, and
550 ≤ f_H2 ≤ 650 holds.
6. The processing device according to any one of claims 2 to 5,
wherein the section specifying unit obtains the mode based on data obtained by performing at least downsampling processing on the sound data.
7. The processing device according to any one of claims 1 to 6,
wherein the acquisition unit acquires a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors, and
the section specifying unit:
specifies at least one of the first section and the second section in first sound data included in the plurality of pieces of sound data,
generates at least one of first time information indicating a time range of the first section and second time information indicating a time range of the second section, and
specifies the first section and the second section of second sound data, which is included in the plurality of pieces of sound data and differs from the first sound data, based on at least one of the first time information and the second time information.
8. The processing device according to claim 7,
wherein the first sound data indicates a sound detected by a first sensor provided at a first position on the surface of or inside a human body,
the second sound data indicates a sound detected by a second sensor provided at a second position on the surface of or inside the human body, and
the first position is located on the neck, or the first position is closer to the neck than the second position.
9. The processing device according to any one of claims 1 to 6,
wherein the acquisition unit acquires a plurality of pieces of the sound data indicating sounds detected by a plurality of sensors, and
the section specifying unit:
specifies the first section and the second section in each of two or more pieces of the sound data, and
generates third time information indicating a time range specified as the first section in all of the two or more pieces of sound data, or a time range specified as the second section in all of the two or more pieces of sound data.
10. A system comprising:
the processing device according to claim 1; and
a sensor,
wherein the acquisition unit acquires the sound data indicating the sound detected by the sensor.
11. A processing method comprising:
an acquisition step of acquiring sound data including breath sounds; and
a section specifying step of specifying, using a threshold for a value indicating an amplitude of the sound data, at least one of a first section in which breathing is presumed to be performed in the sound data and a second section between a plurality of the first sections,
wherein the value indicating the amplitude is a value indicating a magnitude of vibration at each time of the sound data, and
in the section specifying step, the threshold is determined based on the sound data.
12. A program that causes a computer to execute each step of the processing method according to claim 11.
PCT/JP2019/042240 2018-10-31 2019-10-29 Processing device, system, processing method, and program WO2020090763A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020553908A JP7089650B2 (en) 2018-10-31 2019-10-29 Processing equipment, systems, processing methods, and programs

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-205536 2018-10-31
JP2018205536 2018-10-31

Publications (1)

Publication Number Publication Date
WO2020090763A1 (en) 2020-05-07

Family

ID=70463771

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/042240 WO2020090763A1 (en) 2018-10-31 2019-10-29 Processing device, system, processing method, and program

Country Status (2)

Country Link
JP (1) JP7089650B2 (en)
WO (1) WO2020090763A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012060107A1 (en) * 2010-11-04 2012-05-10 パナソニック株式会社 Biometric sound testing device and biometric sound testing method
JP2013106906A (en) * 2011-11-24 2013-06-06 Omron Healthcare Co Ltd Sleep evaluation apparatus
JP2013202101A (en) * 2012-03-27 2013-10-07 Fujitsu Ltd Apneic state decision device, apneic state decision method, and apneic state decision program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113899446A (en) * 2021-12-09 2022-01-07 北京京仪自动化装备技术股份有限公司 Wafer transmission system detection method and wafer transmission system
CN113899446B (en) * 2021-12-09 2022-03-22 北京京仪自动化装备技术股份有限公司 Wafer transmission system detection method and wafer transmission system

Also Published As

Publication number Publication date
JP7089650B2 (en) 2022-06-23
JPWO2020090763A1 (en) 2021-09-24

Similar Documents

Publication Publication Date Title
JP6555692B2 (en) Method for measuring respiration rate and system for measuring respiration rate
EP3334337B1 (en) Monitoring of sleep phenomena
KR101619611B1 (en) Apparatus and method for estimating of respiratory rates by microphone
US8882683B2 (en) Physiological sound examination device and physiological sound examination method
JP5873875B2 (en) Signal processing apparatus, signal processing system, and signal processing method
EP3471610B1 (en) Cardiovascular and cardiorespiratory fitness determination
JP2013518607A (en) Method and system for classifying physiological signal quality for portable monitoring
EP3229692A1 (en) Acoustic monitoring system, monitoring method, and monitoring computer program
CN107106118B (en) Method for detecting dicrotic notch
JP7297190B2 (en) alarm system
JP2013544548A5 (en)
WO2003005893A2 (en) Respiration and heart rate monitor
KR101706197B1 (en) A Novel Method and apparatus for obstructive sleep apnea screening using a piezoelectric sensor
JP6522327B2 (en) Pulse wave analyzer
JP2001190510A (en) Periodic organismic information measuring device
JP7089650B2 (en) Processing equipment, systems, processing methods, and programs
WO2015178439A2 (en) Device and method for supporting diagnosis of central/obstructive sleep apnea, and computer-readable medium having stored thereon program for supporting diagnosis of central/obstructive sleep apnea
JP7122225B2 (en) Processing device, system, processing method, and program
KR102242479B1 (en) Digital Breathing Stethoscope Method Using Skin Image
Rohman et al. Analysis of the Effectiveness of Using Digital Filters in Electronic Stethoscopes
JP2009254611A (en) Cough detector
CN111528831B (en) Cardiopulmonary sound collection method, device and equipment
Makalov et al. Inertial Acoustic Electronic Auscultation System for the Diagnosis of Lung Diseases
US20220361798A1 (en) Multi sensor and method
KR101587989B1 (en) Method of evaluating a value for heart regularity by bioacoustic and electronic stethoscope

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19880656; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2020553908; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19880656; Country of ref document: EP; Kind code of ref document: A1)