WO2015168299A1 - Biometric-music interaction methods and systems - Google Patents
- Publication number
- WO2015168299A1 (PCT/US2015/028313)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- biometric
- data
- content
- music
- state
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 80
- 230000003993 interaction Effects 0.000 title description 3
- 238000002560 therapeutic procedure Methods 0.000 claims description 55
- 238000012545 processing Methods 0.000 claims description 24
- 230000033764 rhythmic process Effects 0.000 claims description 16
- 230000029058 respiratory gaseous exchange Effects 0.000 claims description 10
- 238000009877 rendering Methods 0.000 claims description 9
- 230000004044 response Effects 0.000 claims description 6
- 230000036772 blood pressure Effects 0.000 claims description 5
- 230000002996 emotional effect Effects 0.000 claims description 3
- 230000003155 kinesthetic effect Effects 0.000 claims description 3
- 230000035790 physiological processes and functions Effects 0.000 claims description 3
- 238000002106 pulse oximetry Methods 0.000 claims description 3
- 230000010412 perfusion Effects 0.000 claims description 2
- 230000001755 vocal effect Effects 0.000 claims description 2
- 238000001514 detection method Methods 0.000 abstract description 68
- 230000008569 process Effects 0.000 abstract description 15
- 238000004458 analytical method Methods 0.000 description 34
- 239000003607 modifier Substances 0.000 description 33
- 230000006870 function Effects 0.000 description 22
- 230000001020 rhythmical effect Effects 0.000 description 22
- 230000002068 genetic effect Effects 0.000 description 18
- 238000004422 calculation algorithm Methods 0.000 description 17
- 230000035882 stress Effects 0.000 description 16
- 230000000875 corresponding effect Effects 0.000 description 14
- 230000008859 change Effects 0.000 description 13
- 230000036996 cardiovascular health Effects 0.000 description 10
- 238000005070 sampling Methods 0.000 description 9
- 230000000694 effects Effects 0.000 description 7
- 238000013507 mapping Methods 0.000 description 7
- 230000003068 static effect Effects 0.000 description 7
- 230000000007 visual effect Effects 0.000 description 7
- 230000036541 health Effects 0.000 description 6
- 230000004048 modification Effects 0.000 description 6
- 238000012986 modification Methods 0.000 description 6
- 230000001413 cellular effect Effects 0.000 description 5
- 230000007613 environmental effect Effects 0.000 description 5
- 238000005259 measurement Methods 0.000 description 5
- 230000001133 acceleration Effects 0.000 description 4
- 230000008901 benefit Effects 0.000 description 4
- 239000003086 colorant Substances 0.000 description 4
- 239000000203 mixture Substances 0.000 description 4
- 230000036387 respiratory rate Effects 0.000 description 4
- 206010020772 Hypertension Diseases 0.000 description 3
- 239000008186 active pharmaceutical agent Substances 0.000 description 3
- 238000004891 communication Methods 0.000 description 3
- 238000012937 correction Methods 0.000 description 3
- 230000002526 effect on cardiovascular system Effects 0.000 description 3
- 230000000737 periodic effect Effects 0.000 description 3
- 230000019612 pigmentation Effects 0.000 description 3
- 230000009467 reduction Effects 0.000 description 3
- 238000012800 visualization Methods 0.000 description 3
- 201000001320 Atherosclerosis Diseases 0.000 description 2
- 230000002776 aggregation Effects 0.000 description 2
- 238000004220 aggregation Methods 0.000 description 2
- 210000004204 blood vessel Anatomy 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 230000000747 cardiac effect Effects 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 2
- 230000002596 correlated effect Effects 0.000 description 2
- 230000008878 coupling Effects 0.000 description 2
- 238000010168 coupling process Methods 0.000 description 2
- 238000005859 coupling reaction Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 230000007274 generation of a signal involved in cell-cell signaling Effects 0.000 description 2
- 238000010348 incorporation Methods 0.000 description 2
- 230000007774 longterm Effects 0.000 description 2
- 238000012544 monitoring process Methods 0.000 description 2
- 238000000513 principal component analysis Methods 0.000 description 2
- 230000037390 scarring Effects 0.000 description 2
- 239000007787 solid Substances 0.000 description 2
- 230000005236 sound signal Effects 0.000 description 2
- 230000003595 spectral effect Effects 0.000 description 2
- 238000007619 statistical method Methods 0.000 description 2
- 238000003860 storage Methods 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 230000001225 therapeutic effect Effects 0.000 description 2
- 241001465754 Metazoa Species 0.000 description 1
- 230000009471 action Effects 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- QVGXLLKOCUKJST-UHFFFAOYSA-N atomic oxygen Chemical compound [O] QVGXLLKOCUKJST-UHFFFAOYSA-N 0.000 description 1
- 238000005311 autocorrelation function Methods 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 239000008280 blood Substances 0.000 description 1
- 210000004369 blood Anatomy 0.000 description 1
- 230000017531 blood circulation Effects 0.000 description 1
- 230000015556 catabolic process Effects 0.000 description 1
- 239000003795 chemical substances by application Substances 0.000 description 1
- 230000037326 chronic stress Effects 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000007906 compression Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 230000001276 controlling effect Effects 0.000 description 1
- 238000007418 data mining Methods 0.000 description 1
- 230000002950 deficient Effects 0.000 description 1
- 238000006731 degradation reaction Methods 0.000 description 1
- 238000012217 deletion Methods 0.000 description 1
- 230000037430 deletion Effects 0.000 description 1
- 230000003205 diastolic effect Effects 0.000 description 1
- 235000006694 eating habits Nutrition 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 230000003862 health status Effects 0.000 description 1
- 230000000004 hemodynamic effect Effects 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 239000004615 ingredient Substances 0.000 description 1
- 230000009191 jumping Effects 0.000 description 1
- 230000007257 malfunction Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 239000003550 marker Substances 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 230000003340 mental effect Effects 0.000 description 1
- 238000012806 monitoring device Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 229910052760 oxygen Inorganic materials 0.000 description 1
- 239000001301 oxygen Substances 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 238000000554 physical therapy Methods 0.000 description 1
- 230000000276 sedentary effect Effects 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
- 230000006641 stabilisation Effects 0.000 description 1
- 238000011105 stabilization Methods 0.000 description 1
- 238000012706 support-vector machine Methods 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6887—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices
- A61B5/6898—Portable consumer electronic devices, e.g. music players, telephones, tablet computers
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0033—Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
- A61B5/0036—Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room including treatment, e.g., using an implantable medical device, ablating, ventilating
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
- G10H1/0025—Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/70—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/63—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2562/00—Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
- A61B2562/02—Details of sensors specially adapted for in-vivo measurements
- A61B2562/0204—Acoustic sensors
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2562/00—Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
- A61B2562/02—Details of sensors specially adapted for in-vivo measurements
- A61B2562/0219—Inertial sensors, e.g. accelerometers, gyroscopes, tilt switches
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/01—Measuring temperature of body parts ; Diagnostic temperature sensing, e.g. for malignant or inflamed tissue
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/021—Measuring pressure in heart or blood vessels
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/024—Detecting, measuring or recording pulse rate or heart rate
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/08—Detecting, measuring or recording devices for evaluating the respiratory organs
- A61B5/0816—Measuring devices for examining respiratory frequency
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/145—Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
- A61B5/14542—Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue for measuring blood gases
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/318—Heart-related electrical modalities, e.g. electrocardiography [ECG]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/44—Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
- A61B5/441—Skin evaluation, e.g. for skin disorder diagnosis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4836—Diagnosis combined with treatment in closed-loop systems or methods
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/155—User input interfaces for electrophonic musical instruments
- G10H2220/361—Mouth control in general, i.e. breath, mouth, teeth, tongue or lip-controlled input devices or sensors detecting, e.g. lip position, lip vibration, air pressure, air velocity, air flow or air jet angle
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/155—User input interfaces for electrophonic musical instruments
- G10H2220/371—Vital parameter control, i.e. musical instrument control based on body signals, e.g. brainwaves, pulsation, temperature or perspiration; Biometric information
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/155—User input interfaces for electrophonic musical instruments
- G10H2220/395—Acceleration sensing or accelerometer use, e.g. 3D movement computation by integration of accelerometer data, angle sensing with respect to the vertical, i.e. gravity sensing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/155—User input interfaces for electrophonic musical instruments
- G10H2220/441—Image sensing, i.e. capturing images or optical patterns for musical purposes or musical control purposes
- G10H2220/455—Camera input, e.g. analyzing pictures from a video camera and using the analysis results as control data
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/121—Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
- G10H2240/131—Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set
Definitions
- the field of invention includes technologies relating to the procedural generation of a computerized response based on user biometric measurements.
- the numbers expressing quantities of ingredients, properties such as concentration, reaction conditions, and so forth, used to describe and claim certain embodiments of the invention are to be understood as being modified in some instances by the term "about." Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the invention are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable. The numerical values presented in some embodiments of the invention may contain certain errors necessarily resulting from the standard deviation found in their respective testing measurements.
- biometric data e.g., from using cell phone hardware as a photoplethysmograph
- content lists e.g., music play lists, videos, etc.
- cross-modality feedback e.g., musical sonification, visual stimuli, etc.
- guided breathing interventions that are directed towards lowering blood pressure
- Miriam Z. Klipper, The Relaxation Response, Harper Collins, New York, 1992.
- the inventive subject matter provides apparatus, systems and methods to sonify biometric data while incorporating unique user data and quality data, such that the generated music output is interesting and informative to the user and assists the user in using the system to maximize its effectiveness.
- the system includes an image sensor, such as a camera on a computing device, that the user can use to capture image data of a part of the body.
- the image data can be video data or a plurality of photos taken sequentially.
- the image sensor can also include a flash.
- the system can also include a biometric detection engine communicatively coupled with the image sensor, and a music generation engine communicatively coupled to the biometric detection engine.
- Each of the biometric detection engine and the music generation engine can be computer-readable instructions stored on non-transitory memory (RAM, flash, ROM, hard drive, solid state drive, optical media, etc.) coupled to one or more processors, which execute the instructions to carry out various functions associated with the inventive subject matter.
- One or both of the biometric detection engine and the music generation engine can alternatively be a dedicated hardware device, such as a processor, specially programmed to carry out the functions associated with the inventive subject matter.
- One or both of the biometric detection engine and the music generation engine can include interfaces, such as communication or device interfaces, that can communicatively couple with separate devices, such as other computing devices, peripheral devices, etc.
- the image sensor, biometric detection engine and music generation engine can be contained within a single device, such as a cellular phone that includes a camera, or other computing devices that include a suitable image sensor.
- the image sensor, biometric detection engine and music generation engine can each be contained within separate computing devices, communicatively connected via a network.
- the biometric detection engine uses the captured image data to derive a biometric signal and extract one or more biometric parameters associated with the signal. For example, the biometric detection engine can derive a periodic biometric signal and extract a heart rate, a percentage of oxygen in the user's blood, a temperature, an electrical signal (e.g., EKG, EEG, etc.), or other biometric parameter.
- the biometric detection engine can be configured to derive the biometric signal based on the RGB values of the received images.
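As a hedged illustration of deriving a periodic signal from image data, the sketch below estimates a heart rate from a per-frame brightness series by finding the dominant period via autocorrelation. The function name, frame rate, and BPM bounds are assumptions for the example, not details taken from the patent.

```python
import math

def estimate_heart_rate(samples, fps, min_bpm=40, max_bpm=200):
    """Estimate pulse rate (BPM) from a per-frame mean-brightness series."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]

    def autocorr(lag):
        # Unnormalized autocorrelation at the given lag (in frames).
        return sum(centered[i] * centered[i + lag] for i in range(n - lag))

    # Search only lags corresponding to a physiological BPM range.
    min_lag = max(1, int(fps * 60 / max_bpm))
    max_lag = min(n - 1, int(fps * 60 / min_bpm))
    best_lag = max(range(min_lag, max_lag + 1), key=autocorr)
    return 60.0 * fps / best_lag

# Synthetic 72 BPM (1.2 Hz) pulse sampled at 30 fps for 10 seconds.
fps = 30
series = [math.sin(2 * math.pi * 1.2 * t / fps) for t in range(fps * 10)]
bpm = estimate_heart_rate(series, fps)  # ≈ 72
```

In practice the brightness series would come from the camera frames described above, band-pass filtered before period estimation.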
- the biometric parameters can then be used by the music generation engine to create one or more music signals, which can then be transmitted to a media player.
- the media player presents the music signal as music reflecting a biometric state of the user.
- the music signals can be generated by using music features associated with the derived biometric parameters.
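One plausible reading of "music features associated with the derived biometric parameters" is a direct parameter-to-feature mapping. The sketch below is invented for illustration; the thresholds and the tempo/mode choices are assumptions, not the patent's mapping.

```python
def music_features(heart_rate_bpm, breath_rate_bpm):
    """Map extracted biometric parameters to illustrative music features."""
    # Tempo tracks the heart rate, clamped to a musically useful range.
    tempo = max(60, min(140, heart_rate_bpm))
    # A slow, even breath suggests a calm state -> major mode; otherwise minor.
    mode = "major" if breath_rate_bpm <= 12 else "minor"
    return {"tempo_bpm": tempo, "mode": mode}

calm = music_features(heart_rate_bpm=58, breath_rate_bpm=10)
# calm == {"tempo_bpm": 60, "mode": "major"}
```

A richer implementation could map additional parameters (blood pressure, stress level) to dynamics, instrumentation, or rhythm, per the claims.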
- the image data captured by the image sensor can be separated into red, green and blue images.
- Each of the red, green and blue images can be weighted according to the strength of detection of the biometric signal.
- the weight proportion or other aspect of the analysis can then be used as an additional parameter used in the music generation process.
- the optimal weighted proportion between the red, green and blue images can be stored for an individual user, allowing for an optimized detection configuration for an individual's unique skin.
- This allows the system to optimize the detection of biometrics for an individual's skin properties (e.g., tone, color, pigmentation, etc.).
- the weighted proportion can be tailored to account for variances in an individual's skin properties (e.g., a person's skin being more tan in the summer than winter, tan lines, differences in color between a person's skin being cold versus warm, etc.).
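The channel-weighting idea above can be sketched as follows, using per-channel variance as a stand-in for "strength of detection" (the actual strength metric is not specified in this text). The normalized weights are the per-user values that could be stored in a profile.

```python
from statistics import pvariance

def channel_weights(red, green, blue):
    """Weight each channel's per-frame mean series by its signal strength.

    Variance is an assumed proxy for detection strength; the weights are
    normalized so they can be stored as a per-user proportion.
    """
    strengths = [pvariance(red), pvariance(green), pvariance(blue)]
    total = sum(strengths) or 1.0
    return [s / total for s in strengths]

def combined_signal(red, green, blue):
    """Blend the three channel series using the learned weights."""
    w = channel_weights(red, green, blue)
    return [w[0] * r + w[1] * g + w[2] * b
            for r, g, b in zip(red, green, blue)]

# Green carries nearly all of the pulsatile variation here, so it dominates.
red, green, blue = [1.0] * 4, [0.0, 2.0, 0.0, 2.0], [1.0, 1.2, 1.0, 1.2]
weights = channel_weights(red, green, blue)
```

Re-running the weighting periodically would let the stored proportion track seasonal or temperature-related changes in the user's skin, as described above.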
- a user can have one or more associated user configurations corresponding to a collection of image capture settings related to the image sensor (e.g., aperture settings, exposure settings, white balancing, etc.).
- the user configurations can be optimized for a particular user's skin characteristics (e.g., color, pigmentation, tone, presence of hair, scarring, etc.).
- a user can have multiple user configurations, to account for differences in skin from one body part to the next.
- a user's profile can be used to store all of the information relevant to a particular user.
- the user's profile can include user-specific data related to the image capture and detection analysis, such as optimized detection configurations, RGB optimized weight values, user configurations related to image capture settings, etc.
- the system can include a user database to store the user profiles for each user.
- the music signal can be modified based on the quality of the images obtained via the camera. Accordingly, if the images are of low quality for the purposes of biometric detection, the music generated can be modified to alert the user to improve the quality of gathered images. The modifications can be perceived by the user as a degradation of the generated music signal, which can be used to guide the user in improving the image detection.
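One way to realize the quality-feedback idea is to turn an image-quality score into an audible degradation amount, for example added noise. The quality scale and noise mapping below are illustrative assumptions.

```python
import random

def degrade(samples, quality, seed=0):
    """Roughen an audio sample buffer in proportion to poor image quality.

    quality is assumed to be a 0..1 score from the biometric detection
    engine; quality 1.0 leaves the music untouched, quality 0.0 adds
    maximal noise, audibly prompting the user to adjust the camera.
    """
    rng = random.Random(seed)
    noise_level = 1.0 - max(0.0, min(1.0, quality))
    return [s + rng.uniform(-noise_level, noise_level) for s in samples]

clean = degrade([0.5] * 8, quality=1.0)   # unchanged
noisy = degrade([0.5] * 8, quality=0.0)   # audibly degraded
```

Other degradations (detuning, filtering, tempo jitter) could serve the same guiding role.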
- long-term statistical health predictions e.g., the onset of atherosclerosis, hypertension, dangerous levels of stress, etc.
- the predictions can be incorporated into the music generation to reflect concerns such as the immediacy of the prediction, the severity of the condition predicted, the likelihood of the prediction, and the overall danger to the user's cardiovascular health.
- the inventive subject matter provides apparatus, systems and methods in which a biometric processing system can aggregate biometric data across individuals within a crowd, an audience of a live performance for example, and generate a media presentation based on the aggregated biometric data.
- One aspect of the inventive subject matter includes a biometric processing system that includes a biometric interface and a processing engine.
- Biometric sensor interfaces, the audience members' cell phones for example, are configured to acquire biometric data sets from individual members of the crowd.
- Each biometric data set can be considered a digital object representative of the corresponding individual's biometric data (e.g., breath rate, heart rate, stress level, movement, blood pressure, etc.).
- the biometric processing engine, possibly disposed on a cell phone or within a server, uses the biometric interface to obtain and process the biometric data sets.
- the biometric processing engine derives a group biometric state as a function of the biometric data sets, where the group biometric state is considered to reflect the state of the crowd.
- the group biometric state can include average biometrics (e.g., average heart rate, average breath rate, average movement, etc.) or other statistical information.
- the processing engine compiles content into a media presentation.
- the engine further configures an output device to render the media presentation.
- the media presentation can be rendered for the crowd, an individual, a manager, a performer, a venue owner, or other stakeholder.
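The aggregation step above can be sketched as a simple reduction of per-member biometric data sets into a group biometric state of averages. The field names are illustrative, not taken from the patent.

```python
from statistics import mean

def group_state(data_sets):
    """Average each biometric field across all audience members' data sets.

    Each data set is assumed to be a dict of numeric biometric readings
    sharing the same keys; richer statistics (variance, trends) could be
    derived the same way.
    """
    keys = data_sets[0].keys()
    return {k: mean(d[k] for d in data_sets) for k in keys}

crowd = [
    {"heart_rate": 70, "breath_rate": 12},
    {"heart_rate": 90, "breath_rate": 16},
]
state = group_state(crowd)  # {"heart_rate": 80, "breath_rate": 14}
```

The resulting state would then drive the selection and compilation of content into the media presentation.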
- the inventive subject matter provides apparatus, systems and methods in which a computing device can operate as a content management engine to generate playlists, or other types of content listings, based on biometric data.
- One aspect of the inventive subject matter includes configuring a mobile device to operate as a content management engine, for example via an installed application.
- the content management engine acquires biometric data from the device, possibly via one or more sensors (e.g., camera, accelerometers, microphone, local sensors, remote sensors, personal area sensors, medical sensors, etc.).
- the engine further derives a bio-state of an entity (e.g., user, human, animal, etc.) as a function of the biometric data.
- the bio-state could include a psychological state, perhaps including stress levels, physiological state, or other bio-state.
- Based on the bio-state, the engine generates a content list (e.g., a music playlist) where the content referenced by the content list can augment, enhance, diminish, or otherwise engage with the bio-state.
- the content management engine can further configure the mobile device to render the content referenced by the content list.
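A minimal sketch of bio-state-driven playlist generation, assuming a heart-rate field and a per-track tempo; the function names, the "diminish"/"augment" modes, and the 10% tempo offsets are hypothetical illustrations of the idea, not the disclosed method:

```python
def generate_content_list(bio_state, catalog, mode="diminish"):
    """Rank tracks whose tempo engages the listener's bio-state.

    `bio_state` carries a heart rate in BPM; `catalog` is a list of
    (title, tempo_bpm) pairs. "diminish" targets a tempo below the
    heart rate to calm the listener; "augment" targets one above it.
    """
    hr = bio_state["heart_rate"]
    target = hr * (0.9 if mode == "diminish" else 1.1)
    # Order the catalog by closeness of track tempo to the target tempo.
    return [title for title, _ in
            sorted(catalog, key=lambda entry: abs(entry[1] - target))]

catalog = [("Calm Tide", 60), ("Road Trip", 110), ("Sprint", 150)]
playlist = generate_content_list({"heart_rate": 100}, catalog, mode="diminish")
```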
- the inventive subject matter provides apparatus, systems and methods in which one or more devices are able to detect biometrics and to construct a therapy regimen, possibly including a multimodal therapy.
- the biosensor interface can take on many different forms including a smart phone, medical device, or other computing device configured or programmed to acquire biometric data.
- a camera on a cell phone can capture image data of a person's flesh, which can then be used to determine heart beat, pulse oximetry or other types of biometric data.
- the therapy engine, possibly comprising a cell phone or remote server, obtains the biometric data via the biosensor interface.
- the engine can further construct a digital therapy regimen as a function of the biometric data.
- the biometric data can be mapped to one or more types of therapy, perhaps based on a therapy template.
- the therapy regimen can comprise instructions that configure a device to render the therapy according to rendering instructions.
- the rendering instructions can cause the device to render a multimodal therapy having audio, visual, haptic, kinesthetic, or other forms of data.
- FIG. 1 is an overview of a sample implementation of the system and methods of the inventive subject matter.
- FIG. 2 is an overview of the modules of a music generation engine, according to an embodiment of the inventive subject matter.
- FIG. 3 is a schematic of a biometric processing system capable of generating a media presentation as a function of a group biometric state, according to an embodiment of the inventive subject matter.
- FIG. 4 illustrates a method of generating content lists from a bio-state, according to an embodiment of the inventive subject matter.
- FIG. 5 is a schematic of a therapy system, according to an embodiment of the inventive subject matter.
- computing devices comprise a processor configured to execute software instructions stored on a tangible, non-transitory computer readable storage medium (e.g., hard drive, solid state drive, RAM, flash, ROM, etc.).
- the software instructions preferably configure the computing device to provide the roles, responsibilities, or other functionality as discussed below with respect to the disclosed apparatus.
- the various servers, systems, databases, or interfaces exchange data using standardized protocols or algorithms, possibly based on HTTP, HTTPS, AES, public-private key exchanges, web service APIs, known financial transaction protocols, or other electronic information exchanging methods.
- Data exchanges are preferably conducted over a packet-switched network: the Internet, a LAN, WAN, VPN, or another type of packet-switched network.
- The following discussion provides many example embodiments of the inventive subject matter. Although each embodiment represents a single combination of inventive elements, the inventive subject matter is considered to include all possible combinations of the disclosed elements. Thus if one embodiment comprises elements A, B, and C, and a second embodiment comprises elements B and D, then the inventive subject matter is also considered to include other remaining combinations of A, B, C, or D, even if not explicitly disclosed.
- As used herein, “coupled to” is intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms “coupled to” and “coupled with” are used synonymously.
- the terms “coupled to” and “coupled with” are also used euphemistically to mean “communicatively coupled with” where two or more networked devices are able to send or receive data over a network.
- the disclosed systems and methods relate to sonification of data, and specifically, to a system and method for the musical sonification of a stream of biometric data (e.g., pulse (systolic and diastolic peak detection) and heart rate variability (the variation in the time interval between heart beats)), personalization data, and quality data.
- the systems and methods use the hardware and computing functions of a user's cell phone to construct a software photoplethysmograph (a pulse oximeter), using the device's camera, flash, and the engines, modules and methods described below.
- image sensor 101 can be used to take an image of a part of the user's body, preferably a body part having exposed skin (i.e., visible to the image sensor 101).
- the image sensor 101 can be a digital camera suitable for capturing image data, such as the camera on a user's cellular phone.
- the user can place their finger in front of the phone's camera.
- the user can be encouraged or instructed, via visual or audio instructions presented through the cell phone, to take image data of a specific part of the body that maximizes the likelihood of detection, such as a part of the body that is easy for the user to reliably capture with the camera or a part of the body having major blood vessels.
- the cellular phone's flash can be employed to enhance the lighting on the user's body part, such as to compensate for poor environmental image capture conditions.
- the image data captured by the image sensor 101 is preferably a video, but can alternatively be a sequence of still images.
- the image data can be in a video or photo format typically used in digital imaging. Examples of such formats include MPEG, AVI, JPEG, H.26X standards, MOV, AVS, WMV, RAW, etc.
- a biometric detection engine 102 can receive the image data from the sensor 101 and carry out a detection analysis to detect a biometric signal from the received images, together with a biometric parameter associated with the biometric signal. To do so, the biometric detection engine 102 can be configured to calculate an average of the red, green and blue (RGB) values of every pixel in images continuously taken from the cell phone camera's stream, calculated at a suitable sampling rate, such as 20 Hz or other frame rates. Each calculated RGB average can be considered a sample.
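The per-frame sampling step described above can be sketched as follows; the NumPy array layout and function names are assumptions for illustration:

```python
import numpy as np

def rgb_average_sample(frame):
    """One detection sample: the mean of all R, G and B values in a frame.

    `frame` is assumed to be an H x W x 3 uint8 array as delivered by a
    camera stream.
    """
    return float(frame.astype(np.float64).mean())

def sample_stream(frames):
    """One sample per frame; at 20 Hz, a buffer of 64 frames spans 3.2 s."""
    return np.array([rgb_average_sample(f) for f in frames])

frame = np.full((4, 4, 3), 10, dtype=np.uint8)  # synthetic uniform frame
sample = rgb_average_sample(frame)
```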
- a buffer of calculated averages can be stored in the device as historical samples.
- the biometric detection engine 102 performs an analysis of the stored averages to determine changes in the RGB average values of the images captured by the camera.
- the blood flow in the finger with each pulse causes a change in the RGB average value of the image captured by the camera.
- the changes in the RGB values can be caused, for example, due to the changes in light reflected from a person's skin at different points of the cardiac cycle.
- the changes in RGB values can be caused by changes in the shape of the skin due to the expansion and compression of blood vessels at different points of the cardiac cycle.
- the biometric detection engine 102 can then apply a statistical autocorrelation to the stored historical samples to determine a periodicity of the changes.
- the detected periodicity can be interpreted as the biometric signal, which can contain biometric parameters (e.g, heart rate, etc.).
- the historical samples can be the last 64 calculated averages.
- the autocorrelation functions can be employed via existing hardware present in cellular phones or other computing devices that offer native Fast Fourier Transform (FFT) functions.
- the autocorrelation analysis can be performed using the Wiener-Khinchin theorem, by which the autocorrelation is computed from the raw data X(t) by applying a Fast Fourier transform (FFT), multiplying the resultant complex vector by its complex conjugate, and finally applying the inverse Fast Fourier transform (iFFT).
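The Wiener-Khinchin computation and the subsequent peak search can be sketched as below; the 40-200 BPM search range and the DC-removal step are illustrative assumptions, not disclosed values:

```python
import numpy as np

def autocorrelation(x):
    """Autocorrelation via the Wiener-Khinchin theorem: FFT the samples,
    multiply the resultant complex vector by its complex conjugate, then
    apply the inverse FFT."""
    x = np.asarray(x, dtype=np.float64)
    x = x - x.mean()                      # remove DC offset (assumption)
    spectrum = np.fft.fft(x)
    return np.real(np.fft.ifft(spectrum * np.conj(spectrum)))

def heart_rate_bpm(samples, fs=20.0, bpm_min=40.0, bpm_max=200.0):
    """Find the peak autocorrelation lag inside the range of plausible
    heart rates; the lag index for a given rate is 60 * fs / BPM."""
    ac = autocorrelation(samples)
    lag_lo = int(60 * fs / bpm_max)       # shortest plausible pulse period
    lag_hi = min(int(60 * fs / bpm_min), len(samples) - 1)
    best = lag_lo + int(np.argmax(ac[lag_lo:lag_hi + 1]))
    return 60.0 * fs / best

# Synthetic pulse: 5 cycles across 64 samples at 20 Hz = 93.75 BPM.
t = np.arange(64)
bpm = heart_rate_bpm(np.sin(2 * np.pi * 5 * t / 64))
```

Note the lag resolution is coarse at a 20 Hz sampling rate, so the recovered BPM is approximate; the document's later discussion of sample size and frequency addresses this trade-off.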
- the result of the analysis can then be searched for the highest autocorrelation value contained between the indices corresponding to the maximum and minimum expected heart rates, the lag index for a given heart rate being 60·S/BPM, where BPM is the heart rate in beats-per-minute and S = 20 is the sampling frequency in Hz.
- the FFT uses an input size that is a power of 2, chosen so that the number of samples is large enough to contain at least 2 full periods of the heart rate at the minimum BPM.
- the biometric detection engine 102 can be programmed to conduct the detection analysis as often as possible. This can be achieved by keeping the sample size as small as possible while remaining statistically significant.
- the sample size can be set to 64, with a sampling frequency of 20 Hz. This allows for an analysis on roughly 3 seconds of samples, covering 2 periods of the minimum BPM, while limiting the delay in reacting to variations of the heart rate to only about 3 seconds.
- the sampling frequency can also be set high enough to allow the detection engine 102 to detect any sudden changes in the average RGB values during the systolic phase. Testing has indicated that 20 Hz can be a sufficiently high frequency. Other sampling frequencies are also contemplated.
- the sampling frequency, for example, can be set according to a balance between a desired level of precision in the detection analysis and available technical resources (e.g., computational/processing limitations, camera limitations, etc.).
- biometric parameters contained in the biometric signal can then be determined.
- Power spectral density (PSD) functions can then be employed to derive heart rate variability (HRV) data.
- the biometric detection engine 102 can be programmed to perform error correction functions related to the image gathering and detection processes.
- a calculated RGB average sample exceeding certain sample thresholds can be filtered out from the sample set used to calculate a heart rate so that a single outlying sample does not corrupt the calculated heart rate.
- the detection engine 102 can store a history of filtered samples, and use statistical analysis to determine an image quality value.
- the image quality value can be based on one or more of the filtered samples as a percentage of the sample set, the frequency of filtered samples within a sample set, and the number, frequency or pattern of filtered samples across multiple sample sets. For example, reaching or exceeding a certain percentage of individual samples filtered from a sample set or from multiple sample sets can be indicative of a problem with the image capture, such as the user not holding the camera at an appropriate distance from the skin, movement of the camera, poor environmental conditions (e.g. lighting, backlighting, etc.), a dirty lens, etc.
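The outlier filtering and percentage-based quality value might look like the following sketch; the threshold values are hypothetical, since the disclosure does not specify them:

```python
def image_quality(samples, lo=20.0, hi=240.0):
    """Filter outlying RGB-average samples and derive a quality value.

    Samples outside [lo, hi] are filtered out before the heart-rate
    calculation so a single outlier does not corrupt it; the quality
    value here is simply the fraction of samples kept (an assumption,
    since other statistics over filtered-sample history also qualify).
    """
    kept = [s for s in samples if lo <= s <= hi]
    quality = len(kept) / len(samples) if samples else 0.0
    return kept, quality

kept, quality = image_quality([120.0, 118.0, 250.0, 122.0])  # one outlier
```

A low quality value over several sample sets would then signal an image-capture problem (camera distance, movement, lighting, a dirty lens) as described above.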
- the biometric detection engine 102 can be programmed to determine that an error has occurred if the calculated RGB averages over time do not change sufficiently to perform the detection analysis.
- a minimal threshold change in RGB average can be set over a sample set, over multiple sample sets, over a sampling period. This can be indicative, for example, of image gathering problems such as the camera being too far from the skin to cause RGB value changes in the captured images, or that the camera is positioned over an incorrect or unusable part of the body for this purpose.
- the image quality value can be based on the determination that the stream of images does not contain detectable differences with which to conduct the detection analysis.
- the biometric detection engine 102 can be used to modify the image capture settings of the image sensor 101 to maximize the quality of the image data captured for the functions performed by the biometric detection engine 102.
- the biometric detection engine 102 can conduct initialization detection analysis procedures by running the detection analysis for a defined amount of time (for example, enough for two periods of the minimum BPM) for each of several different image capture configurations (e.g., pre-defined "default" configurations, last-used configurations, etc.).
- An image capture configuration can be considered a collection of or combination of image sensor settings that can influence the image captured by the camera.
- the settings of the camera can include settings such as aperture, exposure, focus settings, white balancing, contrast, color settings, color filters, flash settings (e.g., timing, brightness, on or off, etc.), resolution settings, frame rate, image stabilization functions, image data format, image data size, etc.
- the first image capture configuration used in the initialization process can be a "default" capture configuration, such as a current configuration of the camera or a camera's own default configuration.
- Subsequent image capture configurations can have changes in one or more of the camera settings that result in a different image being captured by the camera.
- the biometric detection engine 102 can select the image capture configuration that results in the best images for the purposes of detection analysis.
- the number of different image capture configurations tested can be set according to a desired balance of initialization time versus optimizing the image capture configuration.
- the initialization detection analysis procedures can be used to customize the detection analysis for an individual user.
- the initialization detection analysis can determine the optimal image capture configuration for the user's body according to a user's unique skin properties, such as skin color, pigmentation, tone, hair, scarring, etc.
- the image capture configuration for an individual user can be stored in a user's profile as user configurations, and used in future detection analysis sessions. This facilitates a rapid initialization of the system for that user. Multiple user configurations can be created for a user, such as for images captured at different parts of the body where the person's skin properties might vary.
- the user configuration for a particular user can be a global configuration (i.e., it is obtained when the user first uses the system then saved), can be a session configuration (i.e., calculated from scratch every time the user accesses the system), or can be a combination of the two (e.g., a global configuration that can evolve over time to account for changes in the appearance of the user, a global configuration whose settings can be changed for that session only to account for single-session or short-term changes in the appearance of the user or instant environmental conditions, etc.).
- the image can be separated into three images, each image corresponding to the red, green and blue values of pixels in the image, respectively.
- the average values can then be calculated as discussed above, and the analysis performed to detect a biometric signal for each.
- the biometric detection engine 102 can then evaluate a detection strength based on the detection results for each of the red, green and blue images to determine their individual effectiveness within the detection analysis. This separation of the image and analysis of each separated image can be performed as part of the initialization detection analysis procedures.
- the biometric detection engine 102 can then conduct the detection analysis for the purposes of generating the biometric data by continuing the detection analysis using the "strongest" of the red, green and blue images.
- the biometric detection engine 102 can perform detection analysis on the other two colors individually or in combination for error checking, or to check whether changes in the captured images have changed the "strongest" color (such as because of changes in environmental conditions, camera settings, etc.).
- the detection analysis of the remaining colors can be performed at a different sampling rate than the strongest color, such as at a lower sampling rate.
- the detection engine 102 can generate a color selection identifier containing information related to the color selection.
- the color selection identifier can include identification of the current strongest color, as well as data related to the strength of the selected colors and the remaining colors. Some or all of the information included in the color selection identifier can be integrated into one or more of the image capture configuration and the detection quality value.
- the detection engine 102 can separate the images into the red, green and blue images as described above. The detection engine 102 can then assign weights to each of the red, green and blue values according to their relative strengths. Thus, the calculated average of the RGB value as described above is calculated according to the weighted values of the individual red, green and blue values. As such, the calculated average of the RGB value emphasizes the "strongest" color. The detection engine 102 can generate RGB weight data based on the calculated average that can be incorporated into one or more of the image capture configuration and the detection quality value.
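The weighted RGB averaging could be sketched as below; the particular weights and frame layout are assumed for illustration (green often carries most of the pulse signal in photoplethysmography, but the weights here would in practice come from the measured channel strengths):

```python
import numpy as np

def weighted_rgb_sample(frame, weights):
    """Weighted average of a frame's color channels, emphasizing the
    channel with the strongest detected pulse signal.

    `frame` is an H x W x 3 array; `weights` is a length-3 vector for
    [R, G, B] that sums to 1.
    """
    channel_means = frame.astype(np.float64).reshape(-1, 3).mean(axis=0)
    return float(np.dot(channel_means, weights))

# Synthetic frame with channel means R=100, G=50, B=10.
frame = np.dstack([np.full((2, 2), 100),
                   np.full((2, 2), 50),
                   np.full((2, 2), 10)]).astype(np.uint8)
sample = weighted_rgb_sample(frame, np.array([0.2, 0.7, 0.1]))
```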
- the system includes a music generation engine 103 configured to transform the biometric data into one or more musical structures.
- the music generation engine 103 can use a moving average of the heart pulse in order to derive a beats-per-minute calculation.
- the beats-per-minute can be used as an internal clock, acting as a global metronome.
- the global metronome can be subdivided into measures, bars, beats or ticks.
- the music generation engine 103 can be a procedural content generation engine (PCG).
- the music generation engine 103 can include various modular components configured to handle specific aspects of the music signal generation.
- Figure 2 illustrates an example, wherein the generation engine 103 can include a rhythmic module 201 configured to generate rhythmic content, a melodic module 202 configured to generate melodic content, a genetic content generation module 203 configured to generate genetic content (e.g., content unique to the user, based on user configurations, etc.), and a quality module 204 configured to generate quality content.
- Each of the modules 201,202,203,204 can be in communication with all of the other modules, permitting the exchange of data, instructions, and allowing for the cooperation between the modules in completing content generation functions.
- the system can make use of the Pure Data (Pd) dataflow system written by Miller Puckette and others, licensed under the terms of the Standard Improved BSD License.
- a parallel project of Pure Data, libpd (also under BSD license), can be used within the system to build the PCG that can be embedded into mobile operating systems, such as the iOS and Android mobile operating systems.
- Examples of music generation algorithms include the artificial intelligence techniques developed by Davide Morelli and David Plans, described in academic papers (2007, 2008) and in David Plans' PhD thesis, entitled “Remembering the future: genetic co-evolution and MPEG7 matching used in creating artificial music improvisers”, submitted to the University of East Anglia in the UK in July 2008. Procedural generation of musical content in a similar application is also described by the authors in an IEEE journal paper (see David Plans and Davide Morelli, "Experience-Driven Procedural Music Generation for Games", IEEE Transactions on Computational Intelligence and AI in Games, special issue: Computational Aesthetics in Games, June 2012).
- the music generation engine 103 can generate a music signal by using the biometric parameters (e.g., heart rate, breath rate, pulse, HRV, etc.), the image quality value and the user's profile, to identify one or more music features.
- the music features can be considered the building blocks of the music signal.
- Examples of music features include rhythm features, melody features, tone features, scale features, tempo features, note features, pitch features, instrument sound features, phrase impulse features, harmony features, beat features, metronome features, timing features, sound sample features, key features, musical progression features, etc.
- the music features can be embodied as instruction sets, rule sets, data sets, data objects, templates, etc.
- the music features can be stored in a music database 105 that can be accessed by the music generation engine 103 during the music generation process.
- the music database 105 can be local to the music generation engine 103 (i.e., stored in the memory of the device housing the music generation engine 103), a database in a remote server accessible by the music generation engine 103 via a network connection, or a combination thereof.
- Music features can be associated with other music features. For example, certain melody features and/or tempo features can be associated with rhythm features that correspond to particular rhythms to which certain melodies and tempos are well suited.
- Music features can also include modifiers that change or modify other music features.
- a music feature can be a rhythm modifier for a rhythm music feature, which can be used to modify the rhythm aspect of the music signal.
- a modifier can be used to emphasize or de-emphasize certain notes, instruments, to change keys, to modify the tempo, etc.
- Modifiers can be used to modify or change music features prior to the generation of the music signal.
- modifiers can be used to modify a music signal after it has been generated.
- Music features can further include quality modifier features.
- Quality modifier features can be used to affect the music signal such that the quality of the music experienced by the user is changed.
- the quality modifier features can be features that actively modify the generated music signal quality or features that simulate a change in music signal quality.
- a quality modifier that actively modifies the generated music signal can be a feature that amplifies some or all of the music features used in generating a music signal, or of the generated music signal itself.
- An example of a simulating modifier feature can include a feature that adds audible noise to the music signal.
- Quality modifier features can include quality reduction features that degrade or reduce the quality of the music output.
- music quality reduction features include interference features (i.e., features that interfere with the music signal or the output music), a speaker feedback feature (i.e., noise sounding like speaker feedback to the music signal), a radio static feature (i.e., noise sounding like radio static, such as when a radio is picking up a weak or no signal), a low volume feature (i.e., reduces the volume of the output), a disk skipping feature (i.e., noise that sounds like a skipping compact disc or record), a cassette failure feature (i.e., noise that sounds like a cassette malfunction), a stereo output feature (i.e., can alter the music signal so that the output sound is heard in monaural instead of the default stereo), a speaker failure feature (i.e., the music signal is modified so that the output music sounds like it is being played through a blown or otherwise defective speaker), an echo feature (i.e., adding an echo to the audio output), a crowd boo feature (i.e., noise that sounds like a crowd booing at a live event), etc.
- the music generation engine 103 can first perform a mapping of the biometric parameters derived from the biometric signal with music features.
- the mapping can be performed according to a variety of mapping techniques and mapping rules. For example, the mapping can be performed based on pre-determined relationships between biometric signals and music features. Examples of mapping techniques include one-to-one matching, one-to-many matching, many-to-one matching, statistical analysis, clustering analysis, etc.
- the music generation engine 103 can construct content layers corresponding to the different characteristics of a musical composition. The music signal is then generated based on the content layers, such as by a combination of the content layers or by a sequential, hierarchical relationship. For example, the music generation engine 103 can generate rhythmic content, melodic content, genetic content, and quality content.
- the music generation engine 103 can map the biometric parameter corresponding to the heart beat in beats-per-minute (BPM) to metronome features, which can be used to generate a global metronome as a foundation for the music signal.
- the metronome features can include subdivision or marker data that allow the generated global metronome to be sub-divided into measures, bars, beats or ticks. In embodiments, this global metronome can then be used as a baseline for the modular components that instrument the real-time, automatic music generation algorithm.
- the rhythmic characteristics of a generated musical composition can be thought of as the rhythmic content of the music signal.
- the rhythmic content can be generated by synchronizing rhythmic music features with the global metronome.
- the rhythmic features used in the generation of the music signal can also be directly associated with bands of cardiovascular parameters or other periodic biometric parameters.
- a heart rate in BPM can be subdivided into bands of 45-65, 65-75, 75-90, 90-110 and higher than 110 BPM.
- a heart rate parameter falling into a particular band can correspond to rhythmic features associated with that band, and a change in band will result in a change in rhythmic features used to generate the music signal.
- the rhythmic structure of the music signal can be dynamically adjusted according to changing biometric conditions.
- Music features used can include rhythm features (e.g., metric levels, rhythm units, etc.), beat features, and other music features that can affect the rhythm of a musical composition.
- Music features used in the generation of rhythmic content can also include sound features and instrument features, such as those typically associated with rhythm in music. Examples of this kind of music feature can include synthesized sounds and/or sampled drum audio, bass sounds, etc.
- the generation of the rhythmic content can be performed by a rhythmic module 201.
- the rhythmic module 201 can include procedural music generation algorithms that can synch music features to the global metronome.
- the generation of rhythmic content can respond directly to the bands of cardiovascular data or other periodic biometric data described above.
- Each band can include trigger messages that instruct the rhythm module to adjust the rhythm.
- the rhythm adjustment can be achieved by retrieving new music features that synchronize with the current band (and global metronome), or by modifying the existing music features according to the current band BPM average.
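The band-triggered rhythm adjustment can be sketched using the disclosed band boundaries; the band-index API and the handling of rates below 45 BPM are assumptions:

```python
def bpm_band(bpm):
    """Map a heart rate in BPM to its rhythmic band index, using the
    disclosed subdivision: 45-65, 65-75, 75-90, 90-110, and >110 BPM."""
    if bpm < 45:
        return 0          # below-range rates clamp to the lowest band (assumption)
    bands = [(45, 65), (65, 75), (75, 90), (90, 110)]
    for i, (lo, hi) in enumerate(bands):
        if lo <= bpm < hi:
            return i
    return len(bands)     # higher than 110 BPM

# A change of band would emit a trigger message instructing the rhythm
# module to retrieve new music features or modify the existing ones.
previous, current = bpm_band(72), bpm_band(93)
band_changed = previous != current
```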
- the melodic characteristics of a music composition can similarly be thought of as the melodic content of the music signal.
- the musical features used in the generation of the melodic content can include phrase impulse features (i.e., a series of notes of a certain length), melody features, pitch features, note features, tone features, key features, tempo features, harmony features, scale features, progression features, etc.
- Scale features can include features associated with scale degrees, such as major (Ionian), minor (Aeolian), Phrygian, Lydian and whole tone. Sound and instrument features typically associated with melody can also be associated with the melodic content. Examples of this type of music feature can include synthesized and/or sampled voice audio, instrument audio, etc.
- the music features used in generating melodic content can be associated with biometric parameters directly, including some or all of the same biometric parameters used in the generation of the rhythm units.
- a selected music feature can also be indirectly associated with biometric parameters, wherein the music feature is selected based on its associations with another selected music feature that is directly associated with a biometric parameter.
- Music features used in generating melodic content can also be selected based on the rhythmic content (by way of the global metronome).
- the melodic content can be generated by the melodic module 202.
- the melodic module 202 can include content generation algorithms to procedurally generate melodic content.
- the melodic module 202 can be in communication with the rhythmic module 201.
- the melodic module 202 can exchange messages from the rhythmic module 201 (by way of the global metronome) that trigger the incorporation of musical features, such as phrase impulses, and to ascertain the global tempo.
- the global tempo can be ascertained via tempo features, algorithmically, or a combination thereof.
- biometric signals are translated into melodic content via their corresponding music features.
- the melodic module 202 can further include a concept of scale degrees based on scale features, such as those described above.
- User profile information can also be employed in the music signal generation process.
- One or more aspects of the user profile information can be mapped to music features, which are then included in the generation process, allowing for the customization of the generated music signal to the individual user.
- the music features used can be additional music features from the same pool used to map biometric parameters and/or modifier features used to modify selected music features as described above.
- the mapping of music features with user profile information can be performed using the techniques described above.
- the individual user configurations within a user profile can each correspond (and be mapped to) to different music features that are incorporated into the music generation process.
- the music signal generated will be unique to the user, and also unique to the conditions in which a particular user configuration is used for image capture. For example, if a user has multiple user configurations corresponding to different parts of the body, the music generated by capturing image data of the user's finger will be different than the music generated by capturing image data of the user's face. Over time the user can form associations of music signals having a particular modified sound with the body part that causes the particular modification.
- the music (and/or modifier) features corresponding to user profile information can be used to create the genetic content of a music signal.
- the genetic content can be the collection of music and/or modifier features associated with the user profile information, or can be created as a function of these features.
- the genetic content can also be created using the user profile data either by itself or in combination with associated music/modifier features.
- the genetic content of the music signal can be generated by a genetic content generation module 203.
- the music and/or modifier features of the genetic content can be used to modify certain aspects of the music signal, such as certain music features implemented in the rhythmic and melodic contents.
- the genetic content can be used to modify the rhythmic and/or melodic content as a whole.
- the genetic content generation module 203 can include procedural generation algorithms (such as those implemented by Morelli and Plans discussed above) that can generate unique genetic content.
- the user profile and/or the features can be used as seed elements for the generation of the genetic content by the genetic content generation module 203.
- the user configuration can be used as a seed to generate the genetic content, which will modify the music signal and thus change the music audio heard by the user. Using a different user configuration will result in a different genetic content and thus, a different music signal.
- Other seed values can be used in addition to the user profile data, such as cell phone device ID, telephone number, MAC address, IP address, HRV signature, location information (e.g., GPS, triangulation, etc.), and health status information.
- the seed values can be hashed values, such as a one-way hash function string.
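- As an illustrative sketch (not the implementation disclosed here), hashed seed values such as these could drive a deterministic pseudo-random selection of genetic-content features; the field names and the feature pool below are hypothetical:

```python
import hashlib
import random

def genetic_seed(user_config: str, device_id: str, location: str = "") -> int:
    """Derive a 64-bit seed from profile and device data via a one-way hash,
    so raw identifiers never appear in the music generation engine."""
    material = "|".join([user_config, device_id, location])
    digest = hashlib.sha256(material.encode("utf-8")).hexdigest()
    return int(digest[:16], 16)

def pick_features(seed: int, feature_pool: list, n: int = 3) -> list:
    """Deterministically select genetic-content features: the same seed
    (i.e., the same user configuration) always yields the same features."""
    rng = random.Random(seed)
    return rng.sample(feature_pool, n)
```

A different user configuration (e.g., finger vs. face) hashes to a different seed and therefore to different genetic content, matching the behavior described above.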
- the music generation engine 103 can also incorporate quality modifier features into the music signal.
- the quality modifier features can be selected by the music generation engine 103 based on the image quality value reflecting the quality of the captured image data.
- Image quality values can be mapped, such as via the techniques described above, to one or more corresponding quality modifier features.
- the association between image quality values and quality modifier features can be based on one or more factors such as the technique used to derive image quality values, the image capture factors affecting the image quality values, etc.
- the image quality value can include information indicating an intensity of the corresponding quality modifier feature, which can be used to control the degree to which a quality modifier feature will affect a music signal. This intensity information can be, for example, associated with a magnitude of the image quality value, or the image quality value meeting or exceeding certain thresholds, tolerances, ranges, etc.
- the image quality value can correspond to a quality modifier feature that introduces an effect simulating radio static into the music signal, with the intensity of the effect inversely related to the quality of the captured image data as represented by the image quality value.
- for an image quality value reflecting captured image data of high quality (e.g., having an acceptable number of errors, etc.), the static effect can be reduced or absent; as the image quality value drops, the static effect can be increased. If the image data received is of such low quality as to be nearly or completely unusable, the static effect can be made to completely drown out any other audio.
- This use of a quality modifier feature can be used by the system to direct the user to correct low-quality image capture, such as by changing the position of the image sensor relative to the captured body part to result in music with "better reception” via reduced (and ultimately eliminated) "radio static”.
- the modification of the music signal via a quality modifier further allows the system to direct the user into performing corrections via audio cues, such that the user can make the corrections in image capture situations where the body part being captured does not permit the user to see the device's screen.
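- A minimal sketch of the inverse quality-to-static mapping described above; the threshold value and the normalized [0, 1] quality scale are assumptions, not values from this disclosure:

```python
def static_intensity(image_quality: float) -> float:
    """Map an image quality value (assumed normalized to [0, 1]) to a
    radio-static intensity: high quality -> little or no static, low
    quality -> strong static, unusable data -> static drowns out audio."""
    UNUSABLE = 0.1  # below this, image data is treated as unusable
    if image_quality <= UNUSABLE:
        return 1.0  # static completely drowns out any other audio
    return max(0.0, min(1.0, 1.0 - image_quality))
```

As the user repositions the sensor and quality improves, the returned intensity falls toward zero, producing the "better reception" cue described above.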
- a sudden change of image quality value (e.g., a change in image quality value exceeding a certain magnitude) can be associated with a quality modifier value.
- a sudden drop in image quality value (e.g., a change in image quality exceeding an acceptable magnitude occurring within a certain time) can be associated with a quality modifier value that introduces an audio effect of a record or disk skipping into the music signal.
- the skipping effect can be repeated until the image quality value returns to acceptable levels.
- these quality modifier values can be used to provide an indication of the occurrence of single, sudden events (such as a sudden movement of the image sensor or temporary loss of focus) that can also serve as a reminder to the user to keep the image sensor sufficiently still to be capable of capturing usable image data.
- these quality modifier values can be used by the system to alert the user to an ongoing problem with the image capture and direct the user to take action, such as alerting the user that the placement of the image sensor relative to the body part is causing the sensor to be obscured, prompting the user to move the image sensor away from the body part.
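- The sudden-drop trigger for the record-skip effect could be sketched as follows; the magnitude, window, and recovery thresholds are illustrative assumptions:

```python
from collections import deque

class QualityDropDetector:
    """Detect a sudden drop in image quality: a decrease exceeding
    `magnitude` relative to recent samples triggers a record-skip audio
    effect, repeated until quality returns to an acceptable level.
    All parameter values are illustrative."""
    def __init__(self, magnitude: float = 0.4, window: int = 5,
                 acceptable: float = 0.6):
        self.magnitude = magnitude
        self.acceptable = acceptable
        self.history = deque(maxlen=window)  # recent quality samples
        self.skipping = False

    def update(self, quality: float) -> bool:
        """Feed one quality sample; return True while the skip effect plays."""
        self.history.append(quality)
        if self.skipping:
            # repeat the skip effect until quality recovers
            if quality >= self.acceptable:
                self.skipping = False
        elif len(self.history) >= 2 and max(self.history) - quality > self.magnitude:
            self.skipping = True
        return self.skipping
```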
- the quality modifier features can be incorporated into the music signal via the creation of quality content.
- the quality content can be procedurally generated by a quality module 204, and can be a collection of the quality modifier(s) and quality modifier value(s) associated with an image quality value.
- the quality module 204 can transmit a message to the other modules instructing them to adjust their respective content generation to account for an interruption in usable image data, or to wait until further instruction to resume (such as for extended interruptions).
- the music generation engine 103 can be configured to incorporate externally generated audio samples (such as studio-generated music tracks) into the music signal by syncing the sample to the global metronome. To enable this function, the music generation engine 103 can incorporate sample media players such as players having Pure Data components.
- the generated music signal can be provided to a media player 104 to produce audio signals according to the music signal, such as music.
- the music generation engine 103 can export the music signal to the media player 104.
- the music generation engine 103 can also provide instructions, applications, or other code to enable the media player 104 to convert the music signal into audio signals.
- the music signal can be generated in various formats that allow for the conversion of the music signal into audio output.
- Examples of music signals can include a collection of tracks combinable to form the music (such as via music editing programs), a generated music data file in standard or commonly-used digital audio formats (e.g., MIDI, MP3, WAV, streaming music, etc.), electrical audio input signals used to generate music in a speaker, etc.
- the music generation engine 103 can include one or more procedural content generation algorithms 205 in addition to, or instead of, the individual procedural content generation algorithms belonging to the modules.
- the additional algorithms 205 can perform content generation for modules lacking native procedural content generation algorithms, based on generation instructions from these modules (e.g., what music features to use for that particular content, when to switch features, etc.).
- the additional algorithms 205 can perform the function of combining the separate content items from their respective modules into a finalized music signal.
- the music generation engine 103 can be entirely contained within a cellular telephone device or other suitable computing device. As such, the procedural content generation engages the user at a device-oriented point of interaction.
- the music generation engine 103 can be distributed across multiple devices networked together, or can reside entirely in a remote device such as a server.
- the system can further include server applications, such as applications allowing users to sync data from their mobile devices to a web application.
- the server applications can include the storage of user profiles in a user database 106 stored on one or more remote servers accessible via the user client devices.
- the servers can also include the music feature database 105. Generated music signals can also be stored on the servers, allowing for a user to retrieve previously generated music signals to listen to again, or to share with other users.
- User data can be obtained via one or more JSON functions within system application developer packages. For example, heart rate variability can be obtained and then sent to a server application.
- the server application can be written in Node.js (a JavaScript web framework) and configured to accept JSON requests and store these requests in a custom database, such as those contained in a MongoDB instance.
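- The server side is described as Node.js backed by MongoDB; purely to illustrate the shape of the JSON a client might sync, here is a hypothetical payload built in Python (all field names are assumptions, not from this disclosure):

```python
import json
import time

def hrv_payload(user_id: str, rr_intervals_ms: list) -> str:
    """Build a JSON request body carrying heart rate variability data
    for a sync from the mobile device to the server application."""
    body = {
        "userId": user_id,               # illustrative field names
        "timestamp": int(time.time()),
        "metric": "hrv",
        "rrIntervalsMs": rr_intervals_ms,
    }
    return json.dumps(body)
```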
- the client-server system architecture enables various server-side services, such as push-based health information. For example, the user can receive information regarding the user's level of stress, derived from HRV data, at a particular point in his/her week.
- the database can be configured to store information on user data from each mobile session. Further, the database can be designed to store other relevant information such as unique keys and usage data.
- An analysis server can utilize one or more algorithms to analyze user data using data mining techniques for time series. Segmentation algorithms such as "Sliding Windows" or "SWAB" and "Piecewise Linear Representation" allow for removing measurement noise and splitting the data into segments with similar features (flatness, linearity, monotonicity, etc.). Pattern detection, clustering, and classification can then be performed to detect periodicity and outliers in trends. Algorithms such as Artificial Neural Networks, Self-Organizing Maps, Support Vector Machines, or Bayesian classifiers can be used to classify, detect patterns, and assign indirect measures to segments. Additional techniques can be applied, including Principal Component Analysis (PCA) or K-means, to reduce the dimensionality of the data and to look for meaningful patterns and periodicity.
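- A minimal sliding-window segmentation in the spirit described above (a simplified sketch, not the SWAB or Piecewise Linear Representation algorithms themselves; the error threshold is an assumption):

```python
def sliding_window_segments(series, max_error=0.5):
    """Split a time series into segments each approximable by a straight
    line within `max_error` total squared residual. Returns a list of
    (start, end) index pairs covering the series."""
    def sse(points):
        # sum of squared residuals of a least-squares line fit
        n = len(points)
        if n < 3:
            return 0.0
        xs = range(n)
        mx = sum(xs) / n
        my = sum(points) / n
        denom = sum((x - mx) ** 2 for x in xs)
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, points)) / denom
        return sum((y - (my + slope * (x - mx))) ** 2
                   for x, y in zip(xs, points))

    segments, start = [], 0
    for end in range(2, len(series) + 1):
        if sse(series[start:end]) > max_error:
            # window no longer fits one line: close the segment here
            segments.append((start, end - 1))
            start = end - 1
    segments.append((start, len(series)))
    return segments
```

On a trace with a level shift (such as a sudden heart-rate jump) the window breaks at the shift, separating the two trends for downstream pattern detection.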
- the analysis server can apply these techniques to collected user data and forecast events (such as stress peaks) before they happen.
- the forecasted events can be communicated to the user and viewed on their device.
- the analysis and forecasting can be performed by a health inference engine within the analysis server, and the user data used in forecasting can be the generated music signal, the raw biometric signal derived from the image data, or a combination of the two.
- a forecasted event can have associated music features, and can be used to modify a generated music signal.
- the server can communicate the event as well as associated music generation data (e.g., music features, seed values, etc.) to the music generation engine 103.
- the forecasted event can then be integrated into the music generation process, such as by integrating the forecasted event data into the genetic content generation.
- the nature of the music features associated with a particular forecasted event (and consequently, the effect the music features will have on the generated music signal) can be related to factors such as the immediacy of the forecasted event, the severity of the forecasted event, the likelihood the forecasted event will occur, and the risk the forecasted event presents to a user.
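- One hedged way to combine those factors into a single feature-intensity value; the weights and the assumption that each factor is normalized to [0, 1] are illustrative, not from this disclosure:

```python
def event_feature_intensity(immediacy: float, severity: float,
                            likelihood: float, risk: float) -> float:
    """Combine the forecasted-event factors (each assumed in [0, 1])
    into one intensity in [0, 1] that scales how strongly the event's
    associated music features affect the generated signal."""
    weights = {"immediacy": 0.3, "severity": 0.3,
               "likelihood": 0.2, "risk": 0.2}  # illustrative weights
    score = (weights["immediacy"] * immediacy
             + weights["severity"] * severity
             + weights["likelihood"] * likelihood
             + weights["risk"] * risk)
    return max(0.0, min(1.0, score))
```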
- an explanation of the reasoning behind the modification of the music can be provided to the user as well, so that the user can be aware of the forecasted event.
- biometric data can be collected and aggregated across a crowd, an audience for example. Further, as discussed below, the aggregated biometric data can be converted to a multimodal presentation for consumption by a stakeholder.
- the disclosed system can offer a live environment that allows the audience or performer to interact in unprecedented ways. As such, one will appreciate that the disclosed techniques provide for aggregation of biometric information and transforming the biometric information to a set of multimedia presentation signals that command a device to present a multimedia presentation.
- the invention provides a biometric data processing system including: a) a biometric interface configured to acquire biometric data sets from distinct biometric sensor platforms, each biometric data set corresponding to an individual; and b) a biometric processing engine coupled with the biometric interface and configured to: obtain the biometric data sets; derive a group biometric state as a function of the biometric data sets; generate a media presentation based on the group biometric state; and configure a presentation computing system to render the media presentation.
- the system is adapted to include a plurality of smart phones configured to operate as the biometric sensors.
- the group biometric state may include an average biometric, such as one or more of a heart rate, a breath rate, a galvanic value, a movement, a tension, a stress level, a perfusion, an airway patency, and a blood pressure.
- Figure 3 presents an overview of a crowd-based biometric acquisition system within the context of a live performance.
- the venue could also include a game, a sporting event, a movie, a play, a concert, an on-line video gaming event, a television show, or other environment in which multiple individuals can participate as a group or a crowd.
- members of the crowd have mobile devices, their cell phones for example, configured as biometric acquisition systems that collect biometric data sets of the individuals.
- Example techniques for configuring a cell phone to collect biometric data include those discussed herein.
- the biometric data sets can be acquired via a biometric interface (e.g., the crowd's cell phones, an HTTP server, etc.).
- a biometric processing engine collects or otherwise obtains the biometric data sets, for example over a network (e.g., cell network, internet, peer-to-peer network, WAN, LAN, PAN, etc.).
- the engine leverages the data sets to derive a group biometric state reflecting the biometric status of the group as a whole.
- the group biometric state might include a trend in an average heart rate across the crowd.
- the group biometric state can include statistical information reflecting the aggregation of data across the biometric data sets, including heart rate data, breathing rate data, derived stress levels, group movement (e.g., based on geolocation information, accelerometry, video tracking, etc.), synchronized behaviors (e.g., jumping or dancing to the beat of a song), or other types of data.
- Each attribute value (e.g., heart rate) of the group biometric state could be multi-valued where the corresponding attribute value could include an average and a range (e.g., statistical spread, minimum, maximum, etc.).
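- A sketch of deriving such a multi-valued group state from per-individual data sets; the attribute names and sample values are illustrative:

```python
from statistics import mean

def group_biometric_state(data_sets):
    """Aggregate per-individual biometric data sets (dicts of attribute
    name -> value) into a multi-valued group state: average, minimum,
    and maximum for each attribute observed anywhere in the crowd."""
    state = {}
    for attr in {k for ds in data_sets for k in ds}:
        values = [ds[attr] for ds in data_sets if attr in ds]
        state[attr] = {
            "average": mean(values),
            "minimum": min(values),
            "maximum": max(values),
        }
    return state
```

Trends (e.g., the average heart rate rising across the crowd) can then be computed by comparing successive group states over time.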
- the processing engine generates one or more media presentations based on the group biometric state where the media presentation could include visualization elements, sonified elements, graphic data, animations, video, or other types of content.
- the performer, venue owner, audience, crowd, or other stakeholder can observe the group state as well as interact with the media presentation.
- a performer on stage could observe that the group heart rate is trending up based on the visual or sonified information within the media presentation. Based on the information, the performer could play a slower song in an attempt to lower the heart rate of the crowd, or even play a faster song to increase the rate at which the group's collective heart rate is trending up.
- the media presentation can include any type of content modality including, but in no way limited to, image data, video data, computer-generated graphics, real-time graphics, audio data, game data, advertisements, or other types of data.
- the content can be obtained from a content database storing desirable types of content.
- the content could be indexed by biometric state attributes or their values (e.g., heart rate, heart rate value, etc.).
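- A content database indexed by biometric state attributes and value ranges could be sketched as follows; the rows, ranges, and content labels are hypothetical:

```python
# Hypothetical content rows indexed by a biometric attribute and a
# half-open value range [lo, hi).
CONTENT_DB = [
    {"attr": "heart_rate", "lo": 40, "hi": 80, "content": "calm_visuals"},
    {"attr": "heart_rate", "lo": 80, "hi": 200, "content": "energetic_visuals"},
    {"attr": "stress", "lo": 0.7, "hi": 1.0, "content": "soothing_audio"},
]

def lookup_content(attr: str, value: float) -> list:
    """Return the content items indexed under the given biometric
    attribute whose value range contains `value`."""
    return [row["content"] for row in CONTENT_DB
            if row["attr"] == attr and row["lo"] <= value < row["hi"]]
```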
- the present invention contemplates use of a photoplethysmograph to derive a physiological or psychological state. Such bio-state information can then be leveraged to construct one or more content lists (e.g., music play lists, videos, etc.) that can be curated to the state. As such, one will appreciate that the disclosed techniques provide for converting biometric data into a bio-state, which in turn can be used to generate signals that configure a device to present content related to the bio-state.
- the present invention provides technology solutions that offer insight into users' cardiovascular health in a comprehensible way that requires no medical knowledge, and that uses no specialist hardware. Rather, the techniques leverage ubiquitous devices, such as a mobile phone, gaming device, kiosk, vehicle, or other computing device, to collect and analyze biometric data to make statistical predictions as to hypertension, levels of stress, or other bio-states.
- the invention provides a method which includes: a) configuring a mobile device to operate as a content management engine; b) acquiring, by the content management engine, biometric data from the mobile device; c) deriving, by the content management engine, a bio-state of an entity as a function of the biometric data; d) generating, by the content management engine, a content list from the bio-state; and e) configuring the mobile device to render content associated with the content list.
- the bio-state may be a physiological state or a psychological state.
- the step of generating the content list may further include classifying content according to an ontological classification scheme based on bio-state attributes representative of the bio-state, such as, but not limited to, one or more of stress levels, endurance levels, galvanic response, emotional characteristics, and vocal tone.
- the content list may include one or more of the following types of content: image data, audio data, music data, video data, game data, shopping data, medical data, and family data.
- the disclosed approach uses the photoplethysmograph signal, acquired by the methods previously outlined, to determine the subjects' bio-state (e.g., physiological and/or psychological status) in terms of relative bio-state attributes (e.g., stress levels, emotional state, tone of voice, and the like).
- This data is then filtered through an ontological algorithm that pairs particular levels of stress with particular clusters of content attributes, music timbre for example (e.g., high-frequency content might not pair well with high levels of stress), in order to derive a psycho-content correlation between biometric data and particular genres of content.
- for example, heart rate can pace the beats per minute of the content, and other parameters can be mapped to content features such as rhythm, melody, speech patterns, etc.
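- An illustrative, deliberately simplistic version of that stress-to-timbre pairing; the thresholds and cluster contents are assumptions, not values from this disclosure:

```python
def pair_timbre(stress_level: float) -> dict:
    """Map a stress level (assumed in [0, 1]) to a cluster of content
    attributes, following the described heuristic that high stress
    should avoid high-frequency content."""
    if stress_level >= 0.7:
        return {"timbre": "warm", "high_frequency_content": "low",
                "pace": "slow"}
    if stress_level >= 0.4:
        return {"timbre": "neutral", "high_frequency_content": "medium",
                "pace": "moderate"}
    return {"timbre": "bright", "high_frequency_content": "high",
            "pace": "fast"}
```

The resulting attribute cluster would then drive the ontological search for matching content genres.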
- the mobile device operating as a content management engine can engage APIs such as The Echo Nest™ to search for music titles, or other content, that match the results of the ontological search.
- the target content can be streamed via services such as, but not limited to, Rdio™, Spotify™, and Bloom.fm™, to dynamically build playlists that can be returned to the device of the instant invention, e.g., a mobile device.
- This playlist generation engine is advantageously service-agnostic and modularly designed, ready for incoming music services, such as Beats By Dre's Project Daisy™, that might offer more forward-thinking personalization API points.
- the present invention contemplates providing cross-modality feedback (e.g., musical sonification, visual stimuli, etc.) from biometric data to provide therapeutic effects on the user.
- a device of the present invention such as a cell phone operating as a photoplethysmograph configured to generate a visualization or audio representation of the user's breath rate, can be utilized to influence physical or mental therapy.
- the disclosed techniques provide for observing biometrics of an entity and using corresponding biometric data to generate signals that cause a device to render a therapy to the entity.
- the invention provides a therapy system which may include: a) a biosensor interface configured to acquire biometric data; and b) a therapy engine coupled with the biosensor interface and configured to: obtain the biometric data via the biosensor interface; construct a digital therapy regimen associated with a therapy as a function of the biometric data, the digital therapy regimen comprising rendering instructions; and configure a device to render the digital therapy regimen according to the rendering instructions.
- Figure 5 illustrates a therapy system in one embodiment of the invention.
- the system of Figure 5 is configured to make long-term statistical health predictions, such as the onset of atherosclerosis, hypertension, or dangerous levels of stress, through observation of biometric data obtained via a sensor interface (e.g., a person's smart phone). In one embodiment, this may be considered in the context of chest physiotherapy or breathing-focused mindfulness-based stress reduction.
- the system can use a photoplethysmograph signal acquired via image data not only to determine the subjects' hemodynamic status, but also to determine breath rate, such as via one or more accelerometry sensors (i.e., acceleration data from the phone's own sensors, movement, etc.).
- Example techniques that estimate respiratory rate from photoplethysmogram data are described in Chon, K. H., Dash, S., & Ju, K. (2009), "Estimation of Respiratory Rate From Photoplethysmogram Data Using Time & Frequency Spectral Estimation," IEEE Transactions on Biomedical Engineering, doi:10.1109/TBME.2009.2019766.
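- As a rough stand-in for the cited spectral method, respiratory rate can be exposed by smoothing the cardiac pulse out of a PPG trace and counting oscillations of the remaining baseline; the sampling rate, window length, and synthetic test signal below are all assumptions:

```python
import math

def breath_rate_bpm(ppg, fs=30.0, window_s=2.0):
    """Estimate respiratory rate (breaths/min) from a PPG trace: a
    moving average longer than one heartbeat suppresses the cardiac
    pulse, exposing the respiratory baseline, whose upward zero
    crossings are then counted."""
    w = int(fs * window_s)
    baseline = [sum(ppg[max(0, i - w + 1):i + 1]) /
                (i + 1 - max(0, i - w + 1)) for i in range(len(ppg))]
    m = sum(baseline) / len(baseline)
    centered = [b - m for b in baseline]
    crossings = sum(1 for i in range(1, len(centered))
                    if centered[i - 1] < 0 <= centered[i])
    return crossings / (len(ppg) / fs / 60.0)

# Synthetic PPG: a 1.2 Hz cardiac pulse riding on a 0.25 Hz (15
# breaths/min) respiratory modulation, sampled at 30 Hz for 60 s.
fs = 30.0
t = [i / fs for i in range(int(fs * 60))]
ppg = [math.sin(2 * math.pi * 1.2 * x)
       + 2.0 * math.sin(2 * math.pi * 0.25 * x) for x in t]
```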
- the biometric data can be mapped to one or more therapy regimens that could include multimodal outputs including visualizing or sonifying aspects of human breath.
- the digital therapy regimen includes instructions to generate at least one of the following modalities of data: audio data, image data, video data, haptic data, kinesthetic data, olfactory data, and taste data.
- biometric data information can be derived from either or both an accelerometry or photoplethysmographic perspective.
- the biometric data that is collected may include at least one of the following: a breath rate, image data, audio data, accelerometer data, pulse oximetry data, EKG data, EEG data, galvanic data, and temperature data.
- data from a photoplethysmograph signal acquired from a mobile phone's camera, and from a movement signal acquired from the phone's accelerometer/gyroscope, can provide sonification and visualization feedback to support particular chest therapy and mindfulness therapy techniques, in particular Active Cycle Breathing and Pranic (Pranayama) Breathing exercises.
- the invention provides a system as outlined in Figure 5 which includes the following components:
- Biosensor Interface: a smart phone as a sensor platform, for example
- Heart Rate Detector (possibly in therapy engine): derive heart rate based on an R-R peak detector algorithm
- Kalman Filter (possibly in therapy engine): enhance the heart rate signal using the correlated accelerometer signal, filtering out noise from movement
- Breath Rate Detector (possibly in therapy engine): map accelerometry data to a breath rate
- Audio Engine (smart phone, user device): present the audio modality according to rendering instructions via the device's audio system
- Fractal Generator: generate images or graphics according to rendering instructions
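- The Kalman filter component above can be illustrated with a minimal 1-D sketch in which accelerometer-reported movement inflates the measurement noise, so readings taken during movement pull the estimate less; all noise parameters are illustrative assumptions:

```python
class HeartRateFilter:
    """Minimal 1-D Kalman-style filter for a heart-rate estimate.
    Movement (accelerometer magnitude) inflates measurement variance,
    so the filter trusts measurements less while the phone is moving."""
    def __init__(self, initial_hr=70.0, process_var=0.5, base_meas_var=4.0):
        self.x = initial_hr      # current heart-rate estimate (bpm)
        self.p = 1.0             # estimate variance
        self.q = process_var     # random-walk process noise
        self.r0 = base_meas_var  # measurement noise when still

    def update(self, measured_hr, accel_magnitude):
        self.p += self.q                      # predict step (random walk)
        r = self.r0 * (1.0 + 10.0 * accel_magnitude)  # inflate noise
        k = self.p / (self.p + r)             # Kalman gain
        self.x += k * (measured_hr - self.x)  # correct toward measurement
        self.p *= (1.0 - k)
        return self.x
```

With identical readings, a still device moves the estimate further toward the measurement than a moving one, which is exactly the movement-noise suppression the component is described as providing.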
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1620186.5A GB2545096A (en) | 2014-04-29 | 2015-04-29 | Biometric-music interaction methods and systems |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/265,261 | 2014-04-29 | ||
US14/265,261 US10459972B2 (en) | 2012-09-07 | 2014-04-29 | Biometric-music interaction methods and systems |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015168299A1 (en) | 2015-11-05 |
Family
ID=54359297
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2015/028313 WO2015168299A1 (en) | 2014-04-29 | 2015-04-29 | Biometric-music interaction methods and systems |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2545096A (en) |
WO (1) | WO2015168299A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040044897A1 (en) * | 2002-04-25 | 2004-03-04 | Ritech International Hk Ltd | Biometrics parameters protected computer serial bus interface portable data storage device and method of proprietary biometrics enrollment |
US20080275915A1 (en) * | 2003-09-30 | 2008-11-06 | Microsoft Corporation | Image File Container |
WO2013166341A1 (en) * | 2012-05-02 | 2013-11-07 | Aliphcom | Physiological characteristic detection based on reflected components of light |
WO2013170032A2 (en) * | 2012-05-09 | 2013-11-14 | Aliphcom | System and method for monitoring the health of a user |
- 2015-04-29 GB GB1620186.5A patent/GB2545096A/en not_active Withdrawn
- 2015-04-29 WO PCT/US2015/028313 patent/WO2015168299A1/en active Application Filing
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106982392A (en) * | 2016-12-28 | 2017-07-25 | 深圳市创达天盛智能科技有限公司 | The control method for playing back and its device of a kind of multi-medium data |
WO2018167706A1 (en) * | 2017-03-16 | 2018-09-20 | Sony Mobile Communications Inc. | Method and system for automatically creating a soundtrack to a user-generated video |
US10902829B2 (en) | 2017-03-16 | 2021-01-26 | Sony Corporation | Method and system for automatically creating a soundtrack to a user-generated video |
CN110831496A (en) * | 2017-07-06 | 2020-02-21 | 罗伯特·米切尔·约瑟夫 | Biometrics data sonification, state song generation, bio-simulation modeling and artificial intelligence |
US20230268080A1 (en) * | 2022-02-24 | 2023-08-24 | Korea Advanced Institute Of Science And Technology | Analysis system and method for causal inference of digital therapeutics based on mobile data |
EP4411722A1 (en) * | 2023-02-03 | 2024-08-07 | Anthony Noah Thomsen | Providing an adaptive tonetrack depending on a user context |
Also Published As
Publication number | Publication date |
---|---|
GB201620186D0 (en) | 2017-01-11 |
GB2545096A (en) | 2017-06-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15786425 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 201620186 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20150429 |
WWE | Wipo information: entry into national phase |
Ref document number: 1620186.5 Country of ref document: GB |
122 | Ep: pct application non-entry in european phase |
Ref document number: 15786425 Country of ref document: EP Kind code of ref document: A1 |