
WO2024167421A1 - Person monitor - Google Patents


Info

Publication number
WO2024167421A1
WO2024167421A1 (PCT application PCT/NZ2024/050010)
Authority
WO
WIPO (PCT)
Prior art keywords
person
sensor
monitor
person monitor
audio
Prior art date
Application number
PCT/NZ2024/050010
Other languages
French (fr)
Inventor
Karen Frances MATTHEWS
Richard Warwick Jones
Original Assignee
Design Electronics Limited (Administrator Appointed And In Liquidation)
Priority date
Filing date
Publication date
Application filed by Design Electronics Limited (Administrator Appointed And In Liquidation)
Publication of WO2024167421A1

Classifications

    • G08B25/10: Alarm systems in which the location of the alarm condition is signalled to a central station, characterised by the transmission medium, using wireless transmission systems
    • G08B25/08: Alarm systems in which the location of the alarm condition is signalled to a central station, characterised by the transmission medium, using communication transmission lines
    • G08B21/02: Alarms for ensuring the safety of persons
    • G08B21/0423: Alarms responsive to non-activity, e.g. of elderly persons, based on behaviour analysis detecting deviation from an expected pattern of behaviour or schedule
    • G08B21/0469: Presence detectors to detect unsafe condition, e.g. infrared sensor, microphone
    • G08B21/22: Status alarms responsive to presence or absence of persons
    • A61B5/002: Monitoring the patient using a local or closed circuit, e.g. in a room or building
    • A61B5/0836: Measuring rate of CO2 production
    • A61B5/1118: Determining activity level
    • A61B5/4809: Sleep detection, i.e. determining whether a subject is asleep or not
    • A61B5/6889: Sensors mounted on external non-worn devices, e.g. in rooms
    • A61B2562/0204: Acoustic sensors
    • G02B23/00: Telescopes, e.g. binoculars; periscopes; instruments for viewing the inside of hollow bodies; viewfinders; optical aiming or sighting devices
    • G06F18/253: Fusion techniques of extracted features (pattern recognition)
    • G10L15/07: Adaptation to the speaker (speech recognition)
    • G10L15/16: Speech classification or search using artificial neural networks
    • G10L2015/088: Word spotting
    • G10L25/66: Speech or voice analysis for extracting parameters related to health condition
    • G16H40/67: ICT specially adapted for the remote operation of medical equipment or devices

Definitions

  • This invention relates to person monitors and systems including person monitors.
  • Person monitors can be used to monitor a person. They may include multiple sensors distributed throughout the monitored person's building.
  • In one aspect, there is provided a person monitor configured to monitor a person, the person monitor comprising: at least one audio sensor, an electronic processor in communication with the at least one audio sensor, and an electronic storage device configured to store at least one audio signature; wherein the electronic processor is configured to: receive data corresponding to a sound sensed by the at least one audio sensor, compare, using an audio machine learning model, the data with the at least one audio signature, and issue an alert based at least partially on the comparison; wherein the audio machine learning model is at least partially trained using audio training data comprising a sound made by the person.
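The comparison step described above can be sketched as follows. This is an illustrative Python sketch only, not the disclosed implementation: the feature vectors, the cosine-similarity stand-in for the trained audio machine learning model, and the alert threshold are all assumptions.

```python
# Illustrative sketch: compare a sensed sound's feature vector against
# stored audio signatures and decide whether to issue an alert.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def compare_and_alert(sensed_features, signatures, threshold=0.85):
    """Return (closest signature label, whether to issue an alert)."""
    best_label, best_score = None, -1.0
    for label, signature in signatures.items():
        score = cosine_similarity(sensed_features, signature)
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score >= threshold

# Signatures trained on sounds made by the monitored person (hypothetical data).
signatures = {"call_for_help": [0.9, 0.1, 0.4], "cough": [0.1, 0.8, 0.2]}
label, alert = compare_and_alert([0.88, 0.12, 0.41], signatures)
```

In practice the signature store would live in the electronic storage device and the similarity function would be replaced by the trained model's comparison output.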
  • In another aspect, there is provided a person monitor configured to monitor a person, the person monitor comprising: at least one audio sensor, an electronic processor in communication with the at least one audio sensor, and an electronic storage device configured to store at least one audio criterion; wherein the electronic processor is configured to: receive data corresponding to a sound sensed by the at least one audio sensor, compare the data with the at least one audio criterion, and issue an alert based at least partially on the comparison; wherein the at least one audio criterion is at least partially user-defined by the person.
  • In another aspect, there is provided a system for monitoring a person comprising: a cloud server, and a person monitor configured to monitor a person, the person monitor comprising: at least one sensor, a transceiver in communication with the cloud server, an electronic storage device storing sensor configuration information, the sensor configuration information comprising: a sensor activation state, and/or a sensor acquisition rate; and an electronic processor in electronic communication with the at least one sensor, the transceiver, and the electronic storage device; wherein: the at least one sensor: is configured to acquire data at a rate based at least partially on the sensor acquisition rate, and/or is activated or deactivated based at least partially on the sensor activation state; and the electronic processor is configured to: receive, from the cloud server via the transceiver, updated sensor configuration information, update the sensor configuration information stored within the electronic storage device with the updated sensor configuration information, and adjust a sensor activation state and/or sensor acquisition rate based at least partially on the updated sensor configuration information.
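The configuration-update flow in this aspect can be sketched as below. The field names (`activation_state`, `acquisition_rate_hz`) and the per-sensor dictionary layout are assumptions for illustration, not the disclosed schema.

```python
# Sketch: merge updated sensor configuration received from the cloud server
# into the locally stored configuration, then the sensors would be adjusted
# to match the stored values.
def apply_updated_config(stored_config, updated_config):
    for sensor_id, update in updated_config.items():
        entry = stored_config.setdefault(sensor_id, {})
        if "activation_state" in update:
            entry["activation_state"] = update["activation_state"]
        if "acquisition_rate_hz" in update:
            entry["acquisition_rate_hz"] = update["acquisition_rate_hz"]
    return stored_config

stored = {"co2": {"activation_state": "on", "acquisition_rate_hz": 1.0}}
updated = {"co2": {"acquisition_rate_hz": 0.1}, "audio": {"activation_state": "on"}}
stored = apply_updated_config(stored, updated)
```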
  • In another aspect, there is provided a person monitor configured to monitor a person, the person monitor comprising: at least one sensor, an electronic processor in communication with the at least one sensor, and an electronic storage device configured to store at least one previously determined state and associated time of determination; wherein the electronic processor is configured to: determine a state of the monitored person, and associate the determined state with a time of determination; wherein the determined state of the monitored person is based at least partially on: an output of the at least one sensor, and at least one previously determined state and associated time of determination stored within the electronic storage device.
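A state determination that uses both a current sensor output and a previously stored state with its timestamp can be sketched as follows. The state names, the motion threshold, and the eight-hour escalation window are illustrative assumptions.

```python
# Sketch: determine the monitored person's state from a motion-level sensor
# output plus the previously determined state and its time of determination.
def determine_state(motion_level, prev_state, prev_time_s, now_s):
    if motion_level > 0.5:
        return "active"
    # Prolonged non-activity following a resting state may warrant escalation.
    if prev_state == "resting" and now_s - prev_time_s > 8 * 3600:
        return "inactive"
    return "resting"

state = determine_state(motion_level=0.0, prev_state="resting",
                        prev_time_s=0, now_s=10 * 3600)
```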
  • In another aspect, there is provided a person monitor configured to monitor a person, the person monitor comprising: at least one sensor, an electronic processor in communication with the at least one sensor, a transceiver in communication with the electronic processor, an electronic storage device in communication with the electronic processor, and a battery; wherein: power consumed by the person monitor is provided by the battery of the person monitor, and the power consumed by the person monitor is no greater than 1 mW.
  • In another aspect, there is provided a person monitor configured to monitor a person, the person monitor comprising: at least one sensor, an electronic processor in communication with the at least one sensor, and an electronic storage device in communication with the electronic processor, the electronic storage device configured to store a previously determined location and associated time of determination; wherein the electronic processor is configured to determine a location of the monitored person based at least partially on: an output from the at least one sensor, and at least one previously determined location and associated time of determination stored within the electronic storage device.
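One way such a history-aware location determination might work is to reject implausible room-to-room jumps over short intervals in favour of the stored previous location. The floor plan, adjacency map, and 60-second plausibility window below are made-up illustrations.

```python
# Sketch: fuse a current sensor-based room estimate with the previously
# determined location; an implausible jump in a short interval is rejected.
ADJACENT = {"bedroom": {"hallway"}, "hallway": {"bedroom", "kitchen"},
            "kitchen": {"hallway"}}

def determine_location(sensor_estimate, prev_location, elapsed_s):
    # Accept the new estimate if it is the same room, an adjacent room,
    # or enough time has passed for any transition to be plausible.
    if sensor_estimate == prev_location or elapsed_s > 60:
        return sensor_estimate
    if sensor_estimate in ADJACENT.get(prev_location, set()):
        return sensor_estimate
    return prev_location  # implausible jump: keep previous determination

loc = determine_location("kitchen", prev_location="bedroom", elapsed_s=10)
```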
  • In another aspect, there is provided a person monitor configured to monitor a person, the person monitor comprising: at least one sensor, an electronic processor in communication with the at least one sensor, and a receiver configured to receive data from at least one healthcare device; wherein the electronic processor is configured to determine a state of the monitored person based at least partially on an output of the at least one sensor and the data received from the at least one healthcare device.
  • In another aspect, there is provided a person monitor configured to monitor at least one person within a building comprising at least one room, the person monitor comprising: at least one CO2 sensor, and at least one H2O sensor; and an electronic processor in communication with the at least one CO2 sensor and at least one H2O sensor; wherein: the processor is configured to determine a number of people within a room of the building, the determination based at least partially on: an output from the at least one CO2 sensor, and an output from the at least one H2O sensor.
  • In another aspect, there is provided a person monitor configured to monitor at least one person within a building comprising a plurality of rooms, the person monitor comprising: at least one CO2 sensor, at least one H2O sensor, and an electronic processor in communication with the at least one CO2 sensor and at least one H2O sensor; wherein: the processor is configured to detect a movement event corresponding to the at least one person moving from a first room to a second room, the detection based at least partially on: an output from the at least one CO2 sensor, and an output from the at least one H2O sensor.
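The CO2/H2O occupancy determination in the two aspects above can be sketched as a fusion of two per-sensor estimates. The baselines and per-person emission constants below are rough assumptions for illustration, not values from the disclosure.

```python
# Sketch: estimate the number of people in a room from CO2 and humidity rise
# above baseline, fusing the two independent estimates by averaging.
def estimate_occupancy(co2_ppm, h2o_rel_pct, co2_baseline=420.0,
                       h2o_baseline=40.0, co2_per_person=150.0,
                       h2o_per_person=3.0):
    from_co2 = max(0.0, co2_ppm - co2_baseline) / co2_per_person
    from_h2o = max(0.0, h2o_rel_pct - h2o_baseline) / h2o_per_person
    return round((from_co2 + from_h2o) / 2)

people = estimate_occupancy(co2_ppm=720.0, h2o_rel_pct=46.0)
```

A movement event between rooms could then be inferred from occupancy falling in one room while rising in an adjacent one.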
  • Figure 1 illustrates an example person monitor.
  • Figure 2 depicts an example network including a person monitor.
  • Figure 3 illustrates an example system including a pattern machine learning model.
  • Figure 4 depicts an example method of detecting a trend in a monitored person.
  • Figure 5 depicts an example system comprising a person monitor and a cloud server.
  • Figure 6 shows an example person monitor configured to determine a state of a monitored person.
  • Figure 7 depicts an example person monitor configured to determine a location of a monitored person.
  • Figure 8 depicts a person monitor configured to monitor at least one person within a building comprising at least one room.
  • Figure 9 depicts an example of a person monitor configured to monitor a person.
  • Figure 10 depicts a further example of a person monitor configured to monitor a person.
  • Person monitors are used to monitor the status and wellbeing of people, particularly people who may be vulnerable such as elderly people living alone, disabled people, or people receiving post-operative care.
  • FIG. 1 illustrates a person monitor 100 according to one example.
  • The person monitor 100 can comprise at least one sensor 110 and an electronic processor 120 in communication with the at least one sensor 110.
  • The electronic processor 120 can be used to process data generated by the at least one sensor 110 and/or execute instructions received from a server, such as a cloud server, as described in more detail herein.
  • The at least one sensor 110 can be an array of sensors. In most examples, the at least one sensor 110 is localised within a housing of the person monitor 100 rather than being distributed throughout a building.
  • The modality and/or processing of data from the at least one sensor 110 can be chosen to preserve the privacy of the monitored person. For example, the at least one sensor 110 can intentionally omit a camera to prevent prying.
  • The person monitor 100 can further comprise a transceiver 130 in communication with the electronic processor 120.
  • The transceiver 130 can be used for bidirectional communications, e.g. with a gateway and/or cloud server, as described herein. In other examples, transceiver 130 may be replaced by a dedicated transmitter and/or receiver, depending on the requirements of person monitor 100.
  • The person monitor 100 can further comprise an electronic storage device 140 in communication with electronic processor 120. Electronic storage device 140 can be used to store configuration information and other information in electronic memory.
  • A power supply 150 supplies power to the person monitor 100 and its individual components.
  • The power supply 150 can be a battery that supplies all power consumed by the person monitor 100.
  • The power supply 150 can include a wired power connection or may include a back-up battery alongside a wired power connection.
  • The person monitor power supply 150 can comprise a very low power, very high efficiency switching regulator with an adjustable regulated voltage output.
  • The output voltage can be dynamically adjusted by electronic processor 120 to reduce power consumption during idle times or when sensors 110 with low-voltage power supply capability are operational.
  • Electronic processor 120 may also control power to those sensors 110 that have integrated shutdown capability, reducing consumption to very low levels, or control external sensor power supply switches to eliminate sensor power consumption altogether when the sensors 110 are not in use.
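The idle-time power strategy described above can be summarised in a small sketch. The rail voltages and sensor names are illustrative assumptions; real values would depend on the regulator and sensors chosen.

```python
# Sketch: the processor selects a regulator output voltage and enables only
# the sensors currently in use, disabling those with shutdown capability.
def plan_power(sensors_in_use, all_sensors, idle):
    voltage = 1.8 if idle else 3.3  # drop the regulated rail during idle times
    enabled = {s: (s in sensors_in_use) for s in all_sensors}
    return voltage, enabled

voltage, enabled = plan_power(sensors_in_use={"co2"},
                              all_sensors={"co2", "audio", "light"},
                              idle=True)
```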
  • Person monitor 100 can further comprise a human-machine interface (HMI) 160.
  • The exact configuration of the HMI 160 can depend on the application of person monitor 100. In many examples, the person monitored by person monitor 100 does not need to interact with or understand the function of the person monitor 100, and the person monitor 100 can passively operate in the background without input from the monitored person.
  • HMI 160 can be a basic interface including, for example, one or more LEDs used to indicate a status (e.g. connection status, power status, etc.) of person monitor 100.
  • HMI 160 can be more sophisticated and may include one or more interfaces (e.g. buttons) that allow the monitored person or other personnel to actively interact with person monitor 100.
  • Person monitor 100 is typically installed in a building or facility where the monitored person resides, and is typically fastened to a wall or ceiling of a room.
  • If the person to be monitored is an elderly person living independently, then the person monitor 100 can be installed on a wall of a room of the house of the elderly person.
  • Alternatively, the person to be monitored may be a person recovering from surgery or in post-operative care in a healthcare facility, and the person monitor 100 may be installed on a wall of a room of the healthcare facility.
  • Two or more person monitors 100 may be installed in the relevant location depending on e.g. the size of the house and/or the number of rooms.
  • Person monitor 100 can be configured for simple and toolless installation (e.g. using an adhesive backing without requiring screws), particularly in examples where power supply 150 is a battery.
  • The at least one sensor 110 can comprise an audio sensor, such as a microphone.
  • The audio sensor can be configured to measure ambient sound levels (such as power, pressure, or intensity levels). Measuring ambient sound levels can help preserve the privacy of the monitored person, as the sound levels can be independent of the actual content of the sound.
  • Subsequent processing of the raw audio sensor data can provide insights into events, lack of events, periodicity and timing of such events and their sound level attributes within the local audio environment. These insights can be used to help determine emerging trends in the monitored person, detect deviations in established patterns, or to trigger alerts, as described in more detail herein.
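Event detection from sound levels alone, as described above, can be sketched as a level excursion above a rolling baseline. The window length and the decibel margin are illustrative assumptions; no audio content is examined, which preserves privacy.

```python
# Sketch: flag indices where the ambient sound level exceeds the mean of the
# previous `window` samples by more than `margin_db`.
def detect_events(levels_db, window=3, margin_db=10.0):
    events = []
    for i in range(window, len(levels_db)):
        baseline = sum(levels_db[i - window:i]) / window
        if levels_db[i] - baseline > margin_db:
            events.append(i)
    return events

# Quiet room with one loud excursion (e.g. a fall or shout) at index 5.
events = detect_events([35, 34, 36, 35, 35, 80, 36, 35])
```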
  • The at least one sensor 110 can additionally or alternatively comprise a pressure sensor configured to measure the absolute pressure of the ambient environment. Atmospheric pressure data can be used by person monitor 100 (or associated backend/cloud services) to increase the accuracy of other measurements via sensor fusion, and/or can provide additional functionality in combination with other sensors via sensor fusion. Ambient pressure measurements acquired by the pressure sensor can also be compared with external atmospheric pressure measurements.
  • The at least one sensor 110 can additionally or alternatively comprise a temperature sensor used to measure and report ambient temperature.
  • The at least one sensor 110 can additionally or alternatively comprise a humidity sensor (e.g. an H2O sensor).
  • The humidity sensor can be configured to measure local humidity within the vicinity of person monitor 100. Data from the humidity sensor can be combined or fused with other sensor data (e.g. temperature data) to infer overall personal comfort levels of the monitored person. Local humidity and temperature data from the at least one sensor 110 can also be compared with external humidity and temperature data, as described in more detail herein.
  • The at least one sensor 110 can additionally or alternatively comprise an ambient light sensor.
  • The ambient light sensor can be configured to sense ambient visible light and/or invisible light, such as infrared radiation.
  • The ambient light sensor can sense and output a measured light level which can be used to assist in location monitoring and/or presence detection (either in isolation or in combination with other sensor data via sensor fusion), and can also be used as an input to higher-level algorithms via sensor fusion.
  • The at least one sensor 110 can additionally or alternatively comprise a CO2 sensor configured to measure ambient CO2 levels within the proximity of person monitor 100. Measurements of ambient CO2 levels about person monitor 100 can be used for location monitoring and/or presence detection, determination of the number of personnel within a room, and to infer movement events or movement of personnel, as described in more detail herein.
  • The at least one sensor 110 can additionally or alternatively comprise a volatile organic compound (VOC) sensor.
  • VOCs can affect personal comfort and overall air quality. Measurement of VOC levels can be an effective way to detect changes to air quality and detect detrimental environmental changes overall. Analysis of the trends in certain VOCs can further be used to recognise or detect the onset of certain health conditions or health changes in the monitored person. For example, trends in certain VOC markers can indicate the onset of certain types of cancer. Measurement and analysis of VOCs within the environment can also be used to determine or infer a state or activity of the monitored person. For example, VOC levels and/or changes to measured VOC levels can indicate that the monitored person is cooking, cleaning, or moving through the building.
  • The VOC sensor can be configured to output an indication of VOC levels without particular selectivity between VOCs. In other examples, the VOC sensor can be configured to discriminate between VOCs and to measure different levels of different VOCs.
  • the at least one sensor 110 can additionally or alternatively comprise a particulate matter sensor, such as a PM2.5 sensor.
  • PM content can be measured across a range of particle sizes.
  • The at least one sensor 110 can include any one or more of the above example sensors in any combination, in addition to other sensors such as NO2 sensors, oxygen sensors, airflow sensors, vibration sensors, and chemical sensors amongst others.
  • Figure 2 illustrates an example network 200 comprising at-home healthcare device 205, person monitor 210, gateway 220, cloud server 230, carer group 240, emergency services 250, and external database 260.
  • Person monitor 210 is configured to bi-directionally communicate with gateway 220 via a transceiver.
  • Person monitor 210 is typically configured for wireless communication with gateway 220 and/or cloud server 230, although it may be configured for wired communication in other examples.
  • The transceiver of person monitor 210 can be configured to communicate using a number of different possible modalities and protocols.
  • Gateway 220 can be similarly configured for bi-directional communication with cloud server 230 and can also comprise a transceiver. These communications can be wireless or through one or more wired connections.
  • Gateway 220 can include processing capabilities (through the use of e.g. physical or virtual electronic processors) or other functionalities.
  • Alternatively, gateway 220 can be configured only to relay communications.
  • Gateway 220 can act as a gateway for multiple person monitors 210, 212, and 214.
  • Cloud server 230 can comprise virtual or physical processors that can be used to process data originating from person monitor 210 via gateway 220.
  • Person monitor 210 can directly forward data originating from its associated sensor(s) to a processor of cloud server 230 for processing, without an intervening gateway.
  • Cloud server 230 can also process data that has been at least partially processed by the electronic processor of person monitor 210 and/or gateway 220. This can reduce the requirements of the electronic processor on board person monitor 210.
  • Cloud server 230 can also comprise electronic storage (e.g. electronic memory) for storing data received and/or processed by cloud server 230.
  • Cloud server 230 may also be in bi-directional communication with additional gateways, e.g. gateway 222, as part of a wider network servicing multiple person monitors. These additional gateways may be associated with additional person monitors (not depicted).
  • The example network 200 includes at-home healthcare device 205.
  • Person monitor 210 can be configured to communicate with at-home healthcare device 205 to receive data therefrom and to relay this data to cloud server 230. Data from the at-home healthcare device 205 can be used by the person monitor 210 or associated cloud server 230 to determine a state of the monitored person, as described herein.
  • The healthcare device 205 may be, for example, a blood pressure monitor or blood glucose monitor.
  • Healthcare device 205 may communicate with person monitor 210 using, for example, Bluetooth Low Energy, although other communication protocols can be used in other examples.
  • Carer group 240 can communicate with cloud server 230 and person monitor 210.
  • Carer group 240 can comprise a group of carers for the monitored person.
  • If the monitored person is an elderly person living independently, then carer group 240 may comprise family members of the monitored person.
  • Carer group 240 may comprise doctors or other healthcare professionals.
  • Individuals within carer group 240 may have different levels of authorisation or access to data, as described in more detail herein. Similarly, the membership of carer group 240 and their associated level of authorisation may be dictated by the monitored person.
  • The carer group 240 may be able to communicate directly with person monitor 210 or may send or receive information through gateway 220 and/or cloud server 230, depending on the configuration of network 200 and its constituent components.
  • Cloud server 230 can be configured to issue alerts to carer group 240 based on data received from person monitor 210. These alerts can relate to health insights regarding the monitored person, changes or trends in their behaviour or condition, and/or can relate to acute emergencies or situations requiring intervention or welfare checks.
  • Carer group 240 can also voluntarily access data stored or processed by cloud server 230 for review, without the issuance of an alert.
  • Carer group 240 may have access to data stored on cloud server 230 so they can analyse trends or patterns as recorded by person monitor 210 of their own volition.
  • Carer group 240 can communicate with person monitor 210, gateway 220, and/or cloud server 230 through e.g. personal devices such as personal computers, cell phones or smartphones, tablets, and other electronic devices.
  • Carer group 240 may only communicate with cloud server 230, and may not be able to directly communicate with person monitor 210.
  • Person monitor 210 may only produce raw data, or data that is not easily interpreted without processing or formatting, which may take place at cloud server 230.
  • Cloud server 230 is in communication with emergency services 250.
  • Emergency services 250 can also comprise health service providers (e.g. doctors) in some examples.
  • Example network 200 further includes an external database 260 in communication with cloud server 230.
  • External database 260 can comprise external information such as external weather information (e.g. pressure, temperature, humidity data, precipitation, and/or wind) not directly measured by person monitor 210.
  • Other external information can include e.g. television schedules, calendar information relating to external events, and other forms of information that may not directly derive from person monitor 210, but may be used to detect trends or events relating to the monitored person, as described in more detail herein.
  • network 200 is only an example, and other networked person monitors and systems can be configured differently.
  • the network may not include a gateway, and the person monitor may directly communicate with a cloud server.
  • the person monitor can also be configured for communication with both the gateway and the cloud server.
  • person monitor 210 can be configured to communicate with gateway 220 via RF communications and may be configured as an LPWAN.
  • Gateway 220 can also communicate with the cloud server over a WAN or LPWAN in some examples, also using e.g. RF communications.
  • Other known communication protocols or architectures can be used in other examples of networked person monitors.
  • the person monitors disclosed herein are configured to determine the state of the monitored person and to detect long-term trends in their health or wellness, detect medium- to short-term events that may benefit from or require intervention, and to detect acute emergencies or events that may require immediate assistance. In some cases, these trends and/or events can be detected by deviations in learned patterns of the monitored person, or from other cues independent of learned patterns. In other cases, trends and/or events can be detected with reference to established patterns that are independent of the learned patterns of the monitored person. As described in more detail herein, the detection of deviations from established patterns or other cues can be made using local processing of sensor data directly at the person monitor, and/or using offsite processing at a networked cloud server or gateway.
  • the disclosed person monitors and/or associated cloud servers can be configured to learn one or more patterns of the monitored person using one or more pattern machine learning models operating on an electronic storage device of the person monitor and/or associated cloud server.
  • Data which is acquired from the at least one sensor of the person monitor can be processed to infer the monitored person's states, actions, activities, and other characteristics, as described in more detail herein. These inferred states and other variables can then be used as inputs to the pattern machine learning model to construct patterns of the monitored person.
  • Figure 3 depicts an example system for learning a pattern of a monitored person.
  • Person monitor 310 is installed in a residence of the monitored person and communicates with cloud server 320, potentially via a gateway (not depicted).
  • Cloud server 320 hosts pattern machine learning model 330.
  • the pattern machine learning model 330 can be at least partially pre-trained based on existing data 340 stored within cloud server 320.
  • existing data 340 may be stored in an external database.
  • the existing data 340 can include statistical averages or statistical patterns for people that have the same or similar characteristics as the monitored person, such as age, gender, socioeconomic group/status, area of living, lifestyle, etc.
  • the existing data 340 may describe, for example, one or more patterns comprising waking/sleeping times, eating/hygiene cycles, activity schedules, and other daily activities that represent statistical averages for the monitored person's cohort.
  • the pattern machine learning model 330 which is pre-trained using existing data 340 can then be adjusted and refined over an adjustment period using raw and/or processed sensor data from the person monitor 310 to customise the pattern machine learning model 330 for the monitored person's patterns. For example, data acquired by the sensors of the person monitor 310 can be processed to determine activities and states of the monitored person. These can be correlated with time and used to learn the monitored person's pattern.
  • the person monitor 310 may not be provided with existing data 340 or a pre-trained machine learning model, and the pattern machine learning model 330 may be trained exclusively using data acquired by the person monitor 310 over a training period.
  • the pattern machine learning model 330 used by the person monitor 310 and associated cloud server 320 or backend systems can continuously learn and adjust to changes in the monitored person's patterns.
  • the pattern machine learning model 330 can be a recurrent neural network, such as a long short-term memory network. This can allow the pattern machine learning model 330 to update its learned patterns and to discern between deviations from established patterns and gradual changes to established patterns.
  • pattern machine learning model 330 can implement other kinds of recurrent neural networks or machine learning architectures.
  • the patterns of the monitored person as learned by the pattern machine learning model 330 can relate to activities, such as cooking, eating, watching television, showering, toileting, etc.
  • the patterns can further relate to states, such as awake, asleep, at rest, active, etc. Patterns including these activities and/or states can be expressed in terms of a schedule with expected start times and/or end times.
  • the pattern machine learning model 330 can learn multiple patterns that occur over different time scales. For example, while the monitored person will often have a pattern over a 24-hour cycle, they may also have patterns that occur over larger timescales, such as weekly or monthly timescales, and the pattern machine learning model 330 can be configured to learn these patterns. For instance, the monitored person may leave their accommodation every Sunday morning to attend church, and the pattern machine learning model 330 may establish a weekly pattern including their absence from the house every Sunday morning. Patterns over still larger timescales can also be detected and learned, such as patterns corresponding with seasons of the year (for example, spending more time outside during summer months). For these larger timescale patterns, some amount of pre-training of pattern machine learning model 330 can be useful if the amount of time needed to build up sufficient training data is significant.
  • the monitored person can behave in patterns that occur over shorter timescales, such as hours, and the pattern machine learning model 330 can be configured to learn these patterns. For instance, certain monitored persons may have very well-defined routines such as hygiene cycles. These routines may be expressed as patterns learned by the pattern machine learning model 330 or as sub-patterns that comprise part of a larger pattern learned by the pattern machine learning model 330.
  • Example health or wellbeing patterns can include sleep patterns, morning vigour patterns, absence patterns (e.g. away-ness from home), social engagement patterns, and food preparation patterns amongst others.
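As a rough illustration of such multi-timescale schedules, the sketch below accumulates per-hour and per-weekday state observations and reports the most common state for a time slot. This frequency-count approach is a stand-in for the LSTM-based pattern machine learning model described above, and all class and variable names are illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime

class PatternHistogram:
    """Toy stand-in for a learned pattern: counts of observed states
    per hour-of-day and per (weekday, hour) bucket."""
    def __init__(self):
        self.daily = defaultdict(lambda: defaultdict(int))   # hour -> state -> count
        self.weekly = defaultdict(lambda: defaultdict(int))  # (weekday, hour) -> state -> count

    def observe(self, ts: datetime, state: str) -> None:
        """Record an inferred state (e.g. 'asleep', 'active') at time ts."""
        self.daily[ts.hour][state] += 1
        self.weekly[(ts.weekday(), ts.hour)][state] += 1

    def expected_state(self, ts: datetime):
        """Most frequently observed state for this time slot, if any."""
        bucket = self.weekly.get((ts.weekday(), ts.hour)) or self.daily.get(ts.hour)
        if not bucket:
            return None
        return max(bucket, key=bucket.get)
```

A week of observations showing the person asleep at 2 a.m. and active at 2 p.m. would lead `expected_state` to return those states for matching future time slots, against which later observations can be compared for deviations.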
  • the person monitor 310 monitors the person to detect deviations or variations from the learned patterns.
  • Data sensed by the one or more sensors of person monitor 310 can be processed at the person monitor 310 and/or cloud server 320 (and/or any intervening gateway) to determine the person's state or activity. This state and/or activity can then be used as an input for the pattern machine learning model 330.
  • Detected deviations or variations from learned patterns can be used to indicate long-term trends in the monitored person, medium-term trends that require or may benefit from intervention by carers, or acute episodes requiring an immediate response (e.g. by emergency personnel).
  • While Figure 3 depicts pattern machine learning model 330 hosted on cloud server 320, this may not be the case in other examples.
  • pattern machine learning model 330 may be hosted on person monitor 310 in other examples, without the need for person monitor 310 to communicate with cloud server 320 to learn patterns.
  • pattern machine learning model 330 may be hosted on a gateway in the network, or distributed across multiple devices in a network.
  • Figure 4 depicts an example method of detecting trends in a monitored person.
  • Data is acquired from at least one sensor of a person monitor at 410.
  • the data may optionally be processed at 420, either using a processor of the person monitor or an offsite processor (e.g. on the associated cloud server.)
  • the sensor data acquired at 410 and/or processed at 420 is compared with the pattern as learned by the pattern machine learning model at 430 to determine if the sensor data (and/or processed sensor data) represents a deviation from the learned pattern of the monitored person. If a deviation is not detected, then the learned pattern may be reinforced at 450 or no action may be taken. If the sensor data indicates a gradual change in the learned pattern, the pattern may be updated at 460. If a deviation is detected, then the sensor data may be analysed to detect trends or events at 470. Additionally or alternatively, if an emergency or event is detected then an alert may be issued at 480.
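The branching of this Figure 4 decision flow might be sketched as follows. This is an illustrative simplification: the confidence threshold, state names, and return labels are assumptions, not the document's implementation.

```python
def process_reading(observed_state, expected_state, pattern_confidence,
                    is_emergency_cue=False):
    """Sketch of the Figure 4 decision flow (steps 430-480).
    pattern_confidence (0..1) stands in for how well-established the
    learned pattern is; all thresholds are illustrative assumptions."""
    if is_emergency_cue:
        return "issue_alert"           # 480: acute event, bypass pattern logic
    if observed_state == expected_state:
        return "reinforce_pattern"     # 450: pattern confirmed
    if pattern_confidence < 0.5:
        return "update_pattern"        # 460: the pattern itself is drifting
    return "analyse_for_trends"        # 470: genuine deviation from a firm pattern
```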
  • sensor data acquired at 410 and processed at 420 may suggest that the monitored person is going to bed at the usual time, but is falling asleep later and later in the night.
  • This data can be compared against the person's usual sleep patterns as learned by the pattern machine learning model at 430 and a trend identified at 470.
  • a carer group can then be informed that the monitored person is suffering from insomnia based on the trend identified at 470.
  • the detected trend of insomnia may be analysed at 470 in other combinations with other identified trends in order to identify trends or health conditions that are responsible for the insomnia.
  • the insomnia detected from sensor data may be fused with other identified trends to determine that the monitored person is suffering from the onset of dementia.
  • the acquired/processed sensor data may suggest that the monitored person is restless in the night during specific durations or stages of their sleep, or may be restless during times at which they typically tend to be asleep. These trends may be analysed at 470 to determine that the monitored person is suffering from the onset of respiratory unwellness.
  • the sensor data that is acquired at 410 and/or processed at 420 can additionally be compared to one or more reference patterns at 440.
  • the one or more reference patterns can be used to detect emerging patterns that are indicative of health trends or episodes requiring intervention, but without reference to the monitored person's patterns of living as learned by the pattern machine learning model. Further analysis to detect trends or events can take place at 470 depending on the comparison, or an alert can be issued at 480. If the comparison at 440 does not indicate anything out of the ordinary, the reference pattern referenced at 440 does not necessarily need to be changed or updated.
  • the sensor data acquired at 410 and/or processed at 420 may suggest that the monitored person is toileting more and more frequently.
  • These behaviours can be compared to the learned pattern at 430 to detect deviations therefrom, but can also be compared to a reference pattern at 440 which represents general patterns that people exhibit when suffering from certain health conditions, such as problems with kidney function, voiding proportion, or other urological conditions that could benefit from early treatment or intervention. If this analysis suggests that the behaviour of the monitored person coincides with a pattern of reduced kidney function or urological pathologies - irrespective of the person's usual habits represented by their learned pattern - then the person monitor and/or cloud server may issue an alert at 480 or analyse for trends at 470.
  • While the reference pattern used at 440 is not personalised for the monitored person, it may be statistically weighted based on the monitored person's characteristics (e.g. age, sex, etc.)
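A minimal sketch of such a weighted reference comparison, using the toileting-frequency example above. The averaging window, weight, and threshold ratio are illustrative assumptions.

```python
def exceeds_reference(daily_event_counts, reference_mean, weight=1.0,
                      threshold_ratio=1.5):
    """Compare the recent average of an event count (e.g. nightly toileting
    events) against a cohort reference mean, statistically weighted for the
    person's characteristics. All constants are illustrative assumptions."""
    window = daily_event_counts[-7:]                 # last week of observations
    recent = sum(window) / len(window)
    return recent > reference_mean * weight * threshold_ratio
```

A rising count such as `[2, 2, 3, 3, 4, 5, 6]` against a cohort mean of 2 events per night would trip the comparison, prompting trend analysis at 470 or an alert at 480 regardless of the person's own learned pattern.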
  • the example method depicted in Figure 4 shows the possibility of issuing an alert after comparing sensor data to a learned pattern or reference pattern.
  • the person monitor and/or cloud backend may be configured to issue an alert based on raw or processed sensor data (e.g. data from 410 or 420), without reference to any particular pattern, if emergency cues are met.
  • Example cues include audible calls for help or pre-defined audio criteria as detected by an audio sensor of the person monitor, as further described herein.
  • the processing of sensor data (as at 420 with respect to Figure 4) originating from the person monitor can be substantially or entirely handled offsite, such as at an associated cloud server.
  • analysis of raw or processed sensor data to e.g. determine variations from established patterns can also be substantially or entirely handled offsite. This can drastically reduce the requirements for the local electronic processor on board the person monitor, which can allow for person monitors with very low power consumption.
  • a person monitor 100 can be configured to monitor a person, and can comprise at least one sensor 110, an electronic processor 120 in communication with the at least one sensor 110, a transceiver 130 in communication with the electronic processor 120, and an electronic storage device 140 in communication with the electronic processor 120.
  • the power supply 150 can be a battery, and the person monitor 100 can be configured so that power consumed by the person monitor 100 is provided by the battery of the person monitor 100.
  • the power consumed by the person monitor 100 can be no greater than 1 mW.
  • the power consumed by the person monitor may be no greater than 1 mW over a 1 second burst.
  • the power consumed by the person monitor may be no greater than 1 mW over four seconds per hour.
  • the battery can supply power to the person monitor 100 for 6 to 10 years, depending on how person monitor 100 is configured.
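The order of magnitude of this battery life can be checked with a back-of-envelope calculation. The battery capacity, voltage, and average power draw below are illustrative assumptions, not figures from the disclosure.

```python
def battery_life_years(capacity_mah=3000, voltage_v=3.0, avg_power_w=100e-6):
    """Rough battery-life estimate for a micropower monitor.
    Defaults (3000 mAh lithium cell, 3 V, 100 uW average draw) are
    illustrative assumptions."""
    energy_j = capacity_mah / 1000 * 3600 * voltage_v  # mAh -> joules
    seconds = energy_j / avg_power_w
    return seconds / (365.25 * 24 * 3600)
```

Under these assumed figures the estimate comes out at roughly a decade, consistent with the multi-year battery life claimed above.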
  • the person monitor 100 can be a component of a system including a cloud server.
  • the cloud server can be in communication with the person monitor via the transceiver of the person monitor, as described herein.
  • the person monitor 100 can be configured to have an idle power consumption no greater than 50 µW.
  • the hardware of the person monitor must be chosen and operated appropriately.
  • the electronic processor 120 can be a low-power processor such as a 32-bit ARM® Cortex®-M4F.
  • the transceiver 130 of the person monitor 100 can be configured to communicate with the cloud server using LPWAN.
  • LPWAN can be particularly suitable for low-power examples of person monitor 100 as it combines excellent data transfer rates, good range, and micropower requirements.
  • the transceiver 130 can consume no more than 50 mA of current for 5 ms during transmission or reception.
  • Other low-power communication protocols can also be used.
  • Communications can also be scheduled with the associated cloud server in order to not exceed maximum power requirements for person monitor 100.
  • the person monitor 100 can be scheduled to communicate to provide delivery of meaningful information while still operating within its low-power requirements.
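To see why scheduled bursts fit within a micropower budget, the average power contribution of periodic radio bursts can be estimated. The 50 mA and 5 ms figures come from the transceiver description above; the supply voltage is an assumption.

```python
def avg_transceiver_power_uw(bursts_per_hour, current_a=0.050,
                             duration_s=0.005, voltage_v=3.0):
    """Average power (in microwatts) contributed by scheduled radio bursts.
    Current and duration per the text (50 mA for 5 ms); the 3 V supply
    voltage is an illustrative assumption."""
    energy_per_burst_j = current_a * voltage_v * duration_s
    return bursts_per_hour * energy_per_burst_j / 3600 * 1e6
```

Even one burst per hour averages only fractions of a microwatt, which is why an LPWAN transceiver can be scheduled to deliver meaningful information while remaining well under a 1 mW ceiling.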
  • the electronic storage device 140 of the person monitor 100 can comprise sensor configuration information comprising a sensor activation state and/or a sensor acquisition rate.
  • the sensor activation state and acquisition rate within the sensor configuration information can dictate whether a given sensor 110 of the person monitor 100 is activated and its associated rate of data acquisition.
  • the associated cloud server can be configured to dynamically adjust or update the sensor configuration information, as described herein. This can be used to keep power consumption by person monitor 100 below the required threshold.
  • Such a configuration can also allow for the person monitor to use standard sensors that can have high sensitivity and high resolution, or may not be configured for ultra-low power consumption, without exceeding the overall power consumption requirements of person monitor 100.
  • Figure 5 depicts an example system comprising a person monitor 500 and a cloud server 550.
  • the person monitor 500 is configured to monitor a person and comprises at least one sensor 510, a transceiver 530 in communication with cloud server 550, an electronic storage device 540, and an electronic processor 520 in communication with the at least one sensor 510, the transceiver 530, and the electronic storage device 540.
  • the electronic storage device 540 comprises sensor configuration information 542 comprising a sensor activation state 544 and/or a sensor acquisition rate 546.
  • the at least one sensor 510 is configured to acquire data at a rate based at least partially on the sensor acquisition rate 546 and/or is activated or deactivated based at least partially on the sensor activation state 544.
  • Cloud server 550 is configured to update the sensor configuration information 542 stored within electronic storage device 540.
  • the electronic processor 520 of the person monitor 500 is configured to receive, from the cloud server 550 via the transceiver 530, updated sensor configuration information; update the sensor configuration information 542 within the electronic storage device 540 with the updated sensor configuration information; and adjust a sensor activation state 544 and/or sensor acquisition rate 546 based at least partially on the updated sensor configuration information.
  • the cloud server 550 can monitor the data stream output from the at least one sensor 510 of the person monitor 500 to determine if the at least one sensor 510 needs to be activated at the present time and, if so, what its data acquisition rate should be. This can reduce power consumption of the person monitor 500 by reducing the rate at which data is acquired by the at least one sensor 510, or by deactivating the at least one sensor 510 entirely if its output is not required.
  • While the person monitor 500 described herein is an ultra-low power person monitor, it should be understood that other person monitors without ultra-low power requirements may also be configured to receive updated sensor configuration information from an associated cloud server.
  • the cloud server 550 can be configured to receive sensor data from the person monitor 500, generate updated sensor configuration information based at least partially on the received sensor data, with the updated sensor configuration information comprising an updated sensor activation state and/or at least one updated sensor acquisition rate, and push the updated sensor configuration information to the person monitor 500.
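A minimal device-side sketch of this update mechanism is shown below. The field names and in-memory representation are illustrative assumptions, not the document's wire format.

```python
from dataclasses import dataclass

@dataclass
class SensorConfig:
    """Illustrative per-sensor configuration: activation state (cf. 544)
    and an acquisition interval derived from the acquisition rate (cf. 546)."""
    active: bool
    interval_s: float

def apply_update(configs, updates):
    """Merge pushed configuration updates into the locally stored
    sensor configuration information (cf. 542)."""
    for sensor_id, new_cfg in updates.items():
        configs[sensor_id] = new_cfg
    return configs
```

For example, a push could densify one sensor's sampling while leaving a newly listed sensor deactivated until needed.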
  • Factors that can influence changes or updates to the sensor activation state 544 and/or sensor acquisition rate 546 can include detected deviations from established patterns or expected behaviour.
  • the person monitor 500 can continuously locally analyse behaviour modes of the monitored person and can provide information to the cloud server.
  • the cloud server can then detect changes in patterns and can update the sensor configuration information (e.g. increasing sensor acquisition rates) if the deviation is considered to be of interest.
  • the cloud server 550 may use one or more machine learning models to determine whether certain deviations or sensed data are of interest and to adjust sensor information accordingly.
  • While the cloud server 550 can be configured to reduce acquisition rates or deactivate the at least one sensor 510 entirely, in some situations the cloud server 550 may determine that more granular measurements from the at least one sensor 510 are required, and may increase the sensor acquisition rate 546 accordingly. The cloud server 550 can also determine that a certain measurement or determination based on some sensor data is ambiguous, and that sensor fusion from an additional sensor is required to resolve the ambiguity. In these situations, cloud server 550 may activate the at least one sensor 510 in order to acquire the data necessary to resolve the ambiguity.
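The server-side decision logic might be sketched as follows; the ambiguity threshold, interval arithmetic, and action labels are illustrative assumptions.

```python
def plan_sensor_update(ambiguity_score, deviation_of_interest,
                       current_interval_s):
    """Server-side sketch: decide whether to enlist an extra sensor,
    densify sampling, or back off to save power. The 0.7 ambiguity
    threshold and halving/doubling of intervals are illustrative."""
    if ambiguity_score > 0.7:
        return ("activate_extra_sensor", current_interval_s)
    if deviation_of_interest:
        return ("keep_active", max(current_interval_s / 2, 1.0))  # more granular
    return ("reduce_rate", current_interval_s * 2)                # save power
```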
  • the power consumed by the person monitor 500 may be no greater than 1 mW.
  • the power consumed by the person monitor 500 may be supplied exclusively by a battery on-board the person monitor 500.
  • the at least one sensor 510 can comprise a SHT40-AD1B-R3 temperature and relative humidity sensor as manufactured by Sensirion.
  • the at least one sensor can comprise an APDS-9250 light sensor as manufactured by Broadcom.
  • the at least one sensor 510 can comprise a SN-GCJA5L PM2.5 sensor as manufactured by Panasonic.
  • the sensor acquisition rate 546 can be 50 milliseconds per reading in bursts of measurements in periodic intervals.
  • the person monitors and/or associated cloud servers described herein can be configured to determine the state of a monitored person based at least partially on sensor data. Determining and tracking the different states of the monitored person can be used to detect deviations from patterns of behaviour and changes or trends, such as long-term health trends, for the monitored person.
  • a person monitor can determine that a monitored person is sleeping based on CO2 measurements and audio measurements. Other data, such as the time of day and the monitored person's established patterns, can also be used as inputs for the determination.
  • the person monitor can further be configured to discriminate between different kinds of sleep, such as restful sleep or restless sleep, based on sensor fusion.
  • An audio sensor of the person monitor may indicate that there is low audible activity nearby the monitored person, and this data can be fused with data from a CO2 sensor to indicate that the person is sleeping.
  • a low CO2 reading can indicate that the monitored person is in a restful sleep, while a comparatively higher CO2 reading can indicate a restless sleep.
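A minimal rules-based fusion sketch for this sleep discrimination is shown below; the dB and ppm thresholds are illustrative assumptions, not calibrated values.

```python
def classify_sleep(audio_level_db, co2_ppm, quiet_db=30, restful_ppm=600):
    """Fuse audio and CO2 readings into a sleep state. The 30 dB quiet
    threshold and 600 ppm restful/restless boundary are illustrative
    assumptions, not calibrated values from the document."""
    if audio_level_db >= quiet_db:
        return "awake"                   # audible activity: not asleep
    # Quiet environment: use CO2 to discriminate sleep quality.
    return "restful_sleep" if co2_ppm < restful_ppm else "restless_sleep"
```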
  • the state of a monitored person may be ambiguous given a set of sensor data from the person monitor. While sensor fusion can be used to resolve these ambiguities in many cases, this may be insufficient in some situations.
  • the person monitor and/or its associated cloud server can be configured to use rules-based logic to determine the state of a monitored person. Because certain states are causally related to other states, knowledge of a preceding state of the monitored person can restrict the number of possible current states for the monitored person. This can be used to help resolve potential ambiguities in the determined state of the monitored person.
  • rules-based logic dictates that an 'awakening' state must necessarily follow a 'sleeping' state, as the monitored person can only awaken if they were previously asleep.
  • Sensor data from a person monitor may suggest that the monitored person could be in one of several possible states, at least one of which includes awakening. However, if the monitored person was recently determined to be awake without an intervening sleeping state, then the 'awakening' state can be ruled out (or at least considered very unlikely) from this list of possible states.
  • Figure 6 depicts an example person monitor 600 configured to determine the state of a monitored person.
  • the person monitor 600 comprises at least one sensor 610, an electronic processor 620 in communication with the at least one sensor 610, and an electronic storage device 640 configured to store at least one previously determined state 642 and associated time of determination 644.
  • the electronic processor 620 can be configured to determine a state of the monitored person based at least partially on an output of the at least one sensor 610 and the at least one previously determined state 642 and associated time of determination 644 stored within the electronic storage device 640.
  • Rules-based logic can be used to help determine the possible states that the monitored person may be in given the at least one previously determined state 642.
  • the associated time of determination 644 indicates how recently the state 642 was determined and can be used to weight the probability of possible states for the monitored person. For example, if the monitored person was previously determined to be awake and the associated time of determination 644 was only seconds prior, then the current state of the monitored person is very unlikely to be awakening, as it is very unlikely that the person fell asleep in the intervening time. Conversely, if the associated time of determination 644 was 12 hours prior, then the influence of the at least one previously determined state 642 can be weighted accordingly, as the monitored person's state may have varied considerably over the past 12 hours.
  • the rules dictating how states are causally related to one another can be codified in state rules 646, which is stored in the electronic storage 640 of the example person monitor 600 depicted in Figure 6.
  • state rules 646 may be stored offsite (e.g. on an associated cloud server) and may be accessed by the person monitor via a transceiver as required. State rules 646 can be pre-defined or can be learned through e.g. machine learning. In still further examples, state rules 646 may be stored on an electronic storage device of a cloud server, with the processor of the cloud server configured to determine the state of the monitored person.
  • the at least one previously determined state 642 used to determine the monitored person's present state is the most recently determined state for the monitored person.
  • the associated time of determination 644 can be the most recent time stored within the electronic storage device 640 of the person monitor 600 (or electronic storage device of the cloud server, as the case may be.)
  • the list of determined states can include awakening, falling asleep, and sleeping. In further examples, the list of determined states can be more granular and can include broken sleeps with awake periods, out-of-bed periods, and awake-in-bed durations at the end of the night.
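The causal state rules and time-weighting described above might be sketched as follows. The transition table and decay horizon are illustrative assumptions standing in for state rules 646.

```python
ALLOWED_NEXT = {  # illustrative causal state rules (cf. state rules 646)
    "sleeping": {"sleeping", "awakening"},
    "awakening": {"awake"},
    "awake": {"awake", "falling_asleep"},
    "falling_asleep": {"sleeping"},
}

def filter_candidates(candidates, prev_state, elapsed_s, horizon_s=6 * 3600):
    """Constrain candidate states using causal rules. The constraint fades
    as the previous determination ages: influence halves every horizon_s
    (an assumed constant), and below 0.5 the rules no longer exclude."""
    influence = 0.5 ** (elapsed_s / horizon_s)
    if influence < 0.5:
        return set(candidates)  # previous state too stale to rule anything out
    return set(candidates) & ALLOWED_NEXT[prev_state]
```

With a seconds-old 'awake' determination, 'awakening' is ruled out of the candidate set; twelve hours later, the same previous state no longer constrains the result.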
  • In some examples, the state determination can be performed by a backend service, e.g. a cloud server.
  • the person monitor 600 may include a transceiver configured to communicate with an associated cloud server.
  • the cloud server can include an electronic processor and electronic storage device configured to store at least one previously determined state and associated time of determination.
  • the electronic processor of the cloud server can be configured to determine a state of the monitored person based at least partially on the at least one previously determined state and associated time of determination stored within the electronic storage device of the cloud server, in addition to sensor data from the at least one sensor 610 received via the transceiver of person monitor 600.
  • the states of the monitored person can also be determined using data derived from at-home healthcare devices such as blood pressure monitors, glucose monitors, weigh scales, and dialysis machines. For example, data from these at- home healthcare devices can be received by the person monitor using Bluetooth Low Energy or other communication protocols.
  • a person monitor can be configured to monitor a person and can comprise at least one sensor, an electronic processor in communication with the at least one sensor, and a receiver configured to receive data from at least one healthcare device.
  • the electronic processor of the person monitor can be configured to determine a state of the monitored person based at least partially on an output of the at least one sensor and the data received from the at least one healthcare device.
  • the person monitor can include a transceiver configured to communicate with a cloud server.
  • the cloud server can include an electronic processor configured to determine a state of the monitored person based at least partially on the output of the at least one sensor and the data received from the at least one healthcare device via the transceiver of the person monitor.
  • the person monitors and/or associated cloud servers disclosed herein can be configured for presence detection and/or to determine the number of persons within a room, as further described herein. Additionally, some person monitors can be further configured to determine a location of the monitored person.
  • the granularity or specificity of the monitored person's location as determined by the person monitor can vary depending on how the person monitor is configured.
  • the person monitor can be configured to determine the monitored person's location at the level of different rooms within the building housing the monitored person. For example, the location of the monitored person may be determined at the level of the monitored person's bedroom, bathroom, or living room, without an estimation of where exactly the monitored person is within a given room.
  • the determined location may be more quantitative or granular. For example, the determined location may be expressed using more specific areas or quadrants within rooms, and/or may include an estimated coordinate with e.g. one or more confidence intervals.
  • the physical layout of the building housing the monitored person imposes restrictions on how the monitored person can move from location to location. For example, if the building includes a plurality of rooms, then the rooms may be physically laid out so that only some rooms are accessible from other certain rooms - e.g. the person may not be able to move from their bedroom to their bathroom without traversing through a hallway. Similarly, as the monitored person can only move through the building with a finite speed, there is a limit on how far they can feasibly move from a location within a given timeframe.
  • Figure 7 depicts an example person monitor 700 configured to determine a location of a monitored person.
  • the person monitor 700 comprises at least one sensor 710, an electronic processor 720 in communication with the at least one sensor 710, and an electronic storage device 740 configured to store at least one previously determined location 742 and associated time of determination 744.
  • the electronic processor 720 can be configured to determine a location of the monitored person based at least partially on an output from the at least one sensor 710 and the at least one previously determined location 742 and associated time of determination 744 stored within the electronic storage device 740.
  • the person monitor 700 can be provided with or can reference location information 746 which codifies the logical relationship between different possible locations.
  • location information 746 is stored within electronic storage device 740.
  • location information 746 may not be stored locally and may be stored on an associated cloud server and provided as needed.
  • Location information 746 may also be stored in the associated cloud server in examples where the person monitor 700 performs minimal to no data processing, with the location of the monitored person (and any associated time of determination) being determined by the cloud server in communication with person monitor 700.
  • the at least one previously determined location 742 used to determine the monitored person's present location is the most recently determined location for the monitored person.
  • the associated time of determination 744 can be the most recent time stored within the electronic storage device 740 of the person monitor 700 (or the electronic storage device of the cloud server, as the case may be).
  • the person monitor 700 can further be configured to determine the location of the monitored person within a building comprising a plurality of rooms, with the location of the monitored person as determined by person monitor 700 corresponding to a room of the plurality of rooms.
  • the person monitor 700 can refer to location information 746 to determine which rooms are accessible (or likely to be accessible) and which are inaccessible (or likely to be inaccessible) from the room corresponding to the most recently determined location of the monitored person over the relevant timeframe (e.g. using associated time of determination 744), and can include or exclude potential rooms on this basis.
  • the remaining list of possible rooms/locations can then be used to resolve any ambiguities in the location of the monitored person.
  • the determined location may be a room that is adjacent to the previously determined location.
  • the person monitor 700 can exclude or include possible locations based on the previously determined locations 742 and associated times of determination 744. For example, given a list of potential locations that the monitored person may be in, the person monitor 700 can be configured to categorise these as either 'possible' or 'not possible'. In other examples, the person monitor 700 can be configured to assign a probability to each possible location. This may be qualitative (e.g. 'likely' or 'unlikely') or quantitative (e.g. 37%). These determined probabilities can then be fed, together with raw or processed sensor information, to e.g. a location determination classifier in order to determine the location of the monitored person.
  • the location information 746 may be pre-defined and provided directly to the person monitor 700 (and/or associated cloud server). For example, if the monitored person resides in their house, the house may be mapped and represented in the form of a mathematical graph, with nodes representing rooms and edges between nodes representing physically accessible routes linking rooms together, potentially with edge weights corresponding to distances between rooms.
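The graph representation described above can be sketched as follows. This is an illustrative example only: the room names, distances, and walking-speed figure are assumptions made for the sketch, not values from the disclosure. The previously determined location plus the elapsed time bounds the set of feasible rooms.

```python
# Hypothetical "location information" graph: nodes are rooms, weighted edges
# are physically accessible routes (weights in metres). All values assumed.
LOCATION_INFO = {
    "bedroom":  {"hallway": 3.0},
    "bathroom": {"hallway": 2.0},
    "hallway":  {"bedroom": 3.0, "bathroom": 2.0, "kitchen": 5.0},
    "kitchen":  {"hallway": 5.0},
}

def feasible_rooms(previous_room, elapsed_s, max_speed_mps=1.5):
    """Return rooms reachable from previous_room within elapsed_s seconds,
    given the graph distances and an assumed maximum walking speed."""
    budget = elapsed_s * max_speed_mps          # maximum distance travelled
    best = {previous_room: 0.0}                 # shortest known distance per room
    frontier = [previous_room]
    while frontier:
        room = frontier.pop()
        for neighbour, dist in LOCATION_INFO[room].items():
            d = best[room] + dist
            if d <= budget and d < best.get(neighbour, float("inf")):
                best[neighbour] = d
                frontier.append(neighbour)
    return set(best)

# Two seconds since the last fix: the kitchen (8 m away) is not yet feasible,
# so any ambiguous sensor reading pointing there can be discounted.
print(feasible_rooms("bedroom", elapsed_s=2.0))
```

A classifier could then restrict its candidate set to the returned rooms, or downweight the excluded ones, as the bullets above describe.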
  • the location information 746 can be provided as complete information at the time of installation, without the need for person monitor 700 (and/or associated cloud server) to learn or determine the layout of the building itself.
  • the person monitor 700 (and/or associated cloud server) can also be provided with pre-trained presence detection classifiers (or data for training presence detection classifiers) to correlate sensor data to the specified rooms laid out in location information 746.
  • location information 746 can be at least partially learned by the person monitor 700 and/or associated cloud server.
  • person monitor 700 may gather data over a training period to determine occupancy, the number of people within rooms, and/or movement events. This data can be assessed using machine learning models to build up location information 746.
  • the person monitor 700 (and/or associated cloud server) can use unsupervised machine learning models, such as k-means clustering, to identify rooms or different locations that the monitored person moves between. Spatial relationships between the different locations identified via machine learning can also be learned and codified within location information 746.
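As a minimal sketch of the unsupervised step, the snippet below runs a basic k-means pass over hypothetical (x, y) position estimates. The data, the choice of k, and the plain-Python implementation are illustrative assumptions, not the disclosed method.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over 2-D points: assign each point to its nearest
    centroid, then recompute centroids, repeating for a fixed iteration count."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]               # keep old centroid if cluster empty
            for i, c in enumerate(clusters)
        ]
    return centroids

# Two dense blobs of position fixes -> two learned "locations" (e.g. rooms).
fixes = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
print(sorted(kmeans(fixes, k=2)))
```

Each learned centroid would then become a node in location information 746, with edges inferred from observed transitions between clusters.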
  • location information 746 may be constructed by person monitor 700 and/or associated cloud server using supervised training with labelled data.
  • location information 746 can be constructed using a combination of different approaches.
  • person monitor 700 or its associated cloud server can be trained to determine the state or activity of a person and can determine the location or position of the monitored person using presence detection, as disclosed herein.
  • Unsupervised learning can be used to identify locations frequented by the monitored person, and those locations can be contrasted against determined states and activities. These determined states and activities can then be used to label the locations identified using machine learning. For example, if the person is consistently determined to be asleep in a certain location, then the person monitor 700 or associated cloud server can associate that location with a bedroom. Similarly, if the person is consistently determined to be cooking in a certain location, then the person monitor 700/cloud server can associate that location with a kitchen.
  • location information 746 can comprise a range of possible or feasible speeds for the monitored person. These can be provided directly (using e.g. statistical speeds for the cohort of the monitored person) or can be learned through measurement.
  • the person monitor 700 may perform minimal or no data processing, and the location of the monitored person may be determined by a processor of a server such as a cloud server in communication with the person monitor 700.
  • person monitor 700 can include a transceiver configured to communicate with an associated cloud server.
  • the cloud server can include an electronic storage device configured to store at least one previously determined location and associated time of determination and an electronic processor.
  • the electronic processor of the cloud server can be configured to determine a location of the monitored person and associate the determined location with a time of determination based at least partially on the at least one previously determined location and associated time of determination stored within the electronic storage device of the cloud server, in addition to sensor data from the at least one sensor 710 received via the transceiver of the person monitor 700.
  • Person monitors disclosed herein can be configured to detect the presence of the monitored person(s) based on sensor data.
  • a person monitor can be configured to detect a person's presence based at least partially on sensor data from at least one CO2 sensor and at least one H2O sensor.
  • the CO2 sensor data and H2O sensor data can be fused for presence detection.
  • the person monitor (and/or associated cloud server) can further be configured to compare the CO2 sensor data and H2O sensor data with external CO2 and/or H2O information (or other weather information).
  • the quantity and location of H2O and CO2 sources can be estimated as a function of the emissions from those sources, the detected state of ventilation within the building, outdoor or external H2O and CO2 values, indoor H2O and CO2 measurements from at least one location within the home, and the rates of change (and the rates of change of those rates) at monitored locations.
  • This information can be used, for example, to help rationalise other sensor measurements. For instance, if multiple people are inside a single room, a person monitor that assumes the room is occupied only by the monitored person may interpret the CO2 level as abnormally high, and may issue an alert or initiate an emergency response on this basis. However, while the CO2 measurements may be abnormally high for a single individual, the readings may actually be normal for a room inhabited by multiple people.
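One way such an occupancy estimate could work is a single-zone CO2 mass balance. The sketch below is a hedged illustration: the per-person generation rate, room volume, and air-change rate are assumed values, not figures from the disclosure.

```python
# Single-zone model: dC/dt = n*G/V - lambda*(C - C_ext), solved for n.
# All three constants below are illustrative assumptions.
CO2_GEN_M3_S = 5.2e-6     # CO2 generated per seated adult (m^3/s), assumed
ROOM_VOLUME_M3 = 40.0     # room volume, assumed
ACH = 0.5                 # air changes per hour, assumed

def estimate_occupancy(c_ppm, dc_dt_ppm_s, c_outdoor_ppm=420.0):
    """Invert the mass balance for the occupant count n, given the indoor
    concentration, its rate of change, and the outdoor reference level."""
    lam = ACH / 3600.0                               # air changes per second
    gen_ppm_s = CO2_GEN_M3_S / ROOM_VOLUME_M3 * 1e6  # ppm/s per occupant
    n = (dc_dt_ppm_s + lam * (c_ppm - c_outdoor_ppm)) / gen_ppm_s
    return max(0, round(n))

# A steady level that looks "abnormally high" for one person is consistent
# with three occupants under the assumed ventilation rate.
print(estimate_occupancy(c_ppm=3200.0, dc_dt_ppm_s=0.0))
```

Fusing the H2O channel (and external weather data from database 860) would then help disambiguate cases where ventilation or humidity changes mimic occupancy changes.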
  • Figure 8 depicts a person monitor 800 configured to monitor at least one person within a building comprising at least one room.
  • the person monitor 800 comprises at least one CO2 sensor 812, at least one H2O sensor 814, and an electronic processor 820 in communication with the at least one CO2 sensor 812 and at least one H2O sensor 814.
  • the electronic processor 820 is configured to determine a number of people within a room of the building based at least partially on an output of the at least one CO2 sensor 812 and an output from the at least one H2O sensor 814.
  • person monitor 800 further includes a transceiver 830 that can receive information from external database 860.
  • the person monitor 800 can be configured to communicate with external database 860 via intermediaries, such as gateways and/or a cloud server (not depicted).
  • External database 860 can comprise external information such as external weather information (e.g. pressure, temperature, and/or humidity data) not directly measured by person monitor 800.
  • the electronic processor 820 of person monitor 800 can additionally or alternatively be configured to detect a movement event corresponding to a monitored person moving from a first room to a second room, based at least partially on an output from the at least one CO2 sensor 812 and an output from the at least one H2O sensor 814.
  • the person monitor 800 can evaluate the time at which a proximal CO2 source has ceased to be in one part of the monitored home and has e.g. moved to another part of the home, with the precise timing of the movement being discernible from the CO2 measurements.
  • if sensor signals relating to H2O changes are fused in the processing of the person monitor (and/or associated cloud server), then accurate states relating to the movement of people within the building can be attributed, along with their vigour and activity.
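A minimal sketch of detecting such a movement event from paired CO2 traces might compare short-window trends in two rooms: a falling trend at the vacated sensor coinciding with a rising trend at the destination. The slope threshold and the sample readings below are illustrative assumptions.

```python
def trend(series):
    """Least-squares slope of evenly sampled readings (ppm per sample)."""
    n = len(series)
    xm = (n - 1) / 2
    ym = sum(series) / n
    return (sum((i - xm) * (y - ym) for i, y in enumerate(series))
            / sum((i - xm) ** 2 for i in range(n)))

def movement_event(room_a, room_b, threshold=1.0):
    """True if CO2 is falling in room_a while rising in room_b, suggesting
    the proximal CO2 source moved from room_a to room_b in this window."""
    return trend(room_a) < -threshold and trend(room_b) > threshold

leaving = [900, 880, 855, 830, 810]   # the source moved away from this sensor
arriving = [430, 450, 475, 500, 520]  # and towards this one
print(movement_event(leaving, arriving))
```

The steepness of the two slopes could further be read as a proxy for vigour of movement, with the H2O channel fused in the same way.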
  • the person monitor 800 may perform minimal or no data processing, and the determination of the number of people within the room can be determined by a processor of a server such as a cloud server in communication with the person monitor 800.
  • person monitor 800 can include a transceiver configured to communicate with an associated cloud server.
  • the cloud server can include an electronic processor.
  • the electronic processor of the cloud server can be configured to determine a number of people within a room of the building based at least partially on an output from at least one CO2 sensor 812 and an output from the at least one H2O sensor 814 of the person monitor 800.
  • the electronic processor of the cloud server can additionally or alternatively be configured to detect a movement event corresponding to at least one person moving from a first room to a second room based at least partially on an output from the at least one CO2 sensor 812 and an output from the at least one H2O sensor 814 of the person monitor 800.
  • the person monitors disclosed herein can be configured to detect emergency or acute situations and to issue alerts to carers, healthcare personnel, and/or emergency services. In some cases, these situations can be detected as deviations from the monitored person's established pattern. In other cases, the person monitor can be configured to issue alerts after being triggered by certain cues without reference to the monitored person's pattern of living. For example, a person monitor can be configured to issue an alert if triggered by certain sounds or noises detected by an audio sensor of a person monitor. In some examples, the sounds or noises which trigger an alert can be predefined. For example, the person monitor can be configured to issue an alert if the audio sensor of the person monitor detects the sound of glass breaking. In other examples, the noises or sounds that the person monitor is configured to respond to can be determined or at least partially defined by the monitored person. This can allow the monitored person to have agency over the types of sounds or noises that will trigger an alert.
  • Figure 9 depicts an example of a person monitor 900 configured to monitor a person.
  • the person monitor 900 comprises an electronic processor 920 and at least one audio sensor 912 in communication with the electronic processor 920.
  • the person monitor 900 further comprises an electronic storage device 940 configured to store at least one audio criteria 942.
  • the audio criteria 942 is at least partially user-defined by the monitored person.
  • the electronic processor 920 is configured to receive data corresponding to a sound sensed by the at least one audio sensor 912, compare the data with the at least one audio criteria 942, and issue an alert based at least partially on the comparison.
  • the audio criteria 942 may be defined over a recording phase during which audio sensor 912 is activated. Sounds created by the monitored person during the recording phase can then be processed and/or recorded. The characteristics of the recorded sound can then be used to define audio criteria 942.
  • the recording phase can be initiated, for example, by the monitored person interacting with the person monitor 900 via an HMI (not depicted). Although the sound recorded during the recording phase may be preserved as a waveform in some examples, it may also be processed and deconvoluted into e.g. its spectral components.
  • audio criteria 942 can correspond to mechanical sounds such as knocks, scrapes, and bumps.
  • the audio criteria 942 can correspond to a customised sequence of mechanical sounds.
  • the audio criteria 942 can be a sequence of knocks.
  • the monitored person may define the audio criteria 942 by initiating a recording phase and knocking in their desired sequence as audio sensor 912 is activated.
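One plausible way to match a user-defined knock sequence, sketched under the assumption that knock onset times have already been extracted from the audio, is to compare inter-knock intervals against the template recorded during the recording phase. The tolerance and the example template are illustrative assumptions.

```python
def intervals(timestamps):
    """Gaps between successive knock times (seconds)."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def matches_template(knock_times, template_times, tolerance=0.15):
    """Compare the sensed knock pattern with the recorded one, gap by gap.
    Using intervals (not absolute times) makes the match start-time invariant."""
    sensed, template = intervals(knock_times), intervals(template_times)
    if len(sensed) != len(template):
        return False
    return all(abs(s - t) <= tolerance for s, t in zip(sensed, template))

# Template from the recording phase: knock, knock, knock ... knock.
template = [0.0, 0.4, 0.8, 1.8]
print(matches_template([10.0, 10.42, 10.78, 11.81], template))  # close match
print(matches_template([10.0, 10.9, 11.8, 12.7], template))     # wrong rhythm
```

A match would then trigger the alert path described above; a slower or evenly spaced sequence would not.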
  • audio criteria 942 can correspond to one or more spoken words or vocalised sounds.
  • the monitored person may decide that they want an otherwise innocuous phrase to be used as a trigger phrase for the person monitor to issue an alert.
  • the comparison between the sound sensed by the at least one audio sensor 912 and the at least one audio criteria 942 can be made using a machine learning model.
  • The machine learning model can operate on electronic processor 920 or may operate on e.g. an associated cloud server.
  • the sound sensed by the at least one audio sensor 912 can be processed and deconvoluted to produce a spectrogram using, for example, a fast Fourier transform.
  • the resulting spectrogram can then be processed using a convolutional neural network to produce a feature map.
  • the feature map can further be used as an input for additional machine learning models, such as a linear classifier. In other examples, other kinds of machine learning models can be used.
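The spectrogram front end described above can be sketched as follows. The frame and hop sizes are illustrative, and a real pipeline would pass the result to the convolutional network rather than simply locating a peak as the demonstration does here.

```python
import numpy as np

def spectrogram(signal, frame=256, hop=128):
    """Magnitude spectrogram: one FFT per Hann-windowed frame."""
    window = np.hanning(frame)
    frames = [signal[i:i + frame] * window
              for i in range(0, len(signal) - frame + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, frame//2 + 1)

# A 1 kHz test tone sampled at 8 kHz should peak in the 1 kHz frequency bin.
fs = 8000
t = np.arange(fs) / fs
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))
peak_bin = int(spec.mean(axis=0).argmax())
print(peak_bin * fs / 256)  # frequency of the dominant bin (Hz)
```

In the pipeline described above, each row of this array would form one column of the image fed to the convolutional neural network to produce the feature map.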
  • the comparison made by the electronic processor 920 may not use a machine learning model.
  • Audio criteria 942 can include a number of characteristics relating to the corresponding sound, and these can be used during the comparison by electronic processor 920.
  • the audio criteria 942 can comprise peak intensities with respect to time, average sound pressure levels with respect to time, waveforms, and spectral information. These can be used during the comparison to determine if the sound sensed by the at least one audio sensor 912 corresponds to the audio criteria 942.
  • audio criteria 942 can correspond to multiple sounds so that an alert can be issued if any one of multiple trigger cues are detected.
  • the audio sensor 912 can be a microphone.
  • Different kinds of microphones can be used, including conventional microphones and MEMS-based microphones.
  • an associated cloud server can be configured to perform the comparison in other examples.
  • an electronic storage device of the cloud server can store the audio criteria. Data from the at least one audio sensor can then be relayed to the cloud server and an electronic processor of the cloud server can then perform the comparison between the sensed sound and audio criteria.
  • a system for monitoring a person can comprise a cloud server and a person monitor.
  • the person monitor can comprise at least one audio sensor and a transceiver configured to communicate with the cloud server.
  • the cloud server can comprise an electronic processor and an electronic storage device configured to store at least one audio criteria.
  • the electronic processor of the cloud server can be configured to receive data corresponding to a sound sensed by the at least one audio sensor of the person monitor, compare the data with the at least one audio criteria, and issue an alert based partially on the comparison, wherein the at least one audio criteria is at least partially user-defined by the monitored person.
  • person monitors can be configured to issue an alert based on audio cues corresponding to spoken words.
  • person monitors can be configured to issue alerts in response to audible cues that are not defined by the monitored person.
  • a person monitor can be configured to issue an alert if a cry for 'Help!' is heard by an audio sensor of the person monitor.
  • differences between individuals can increase the difficulty in accurately recognising these audible cues. These variations can also increase the difficulties in audio recognition even when the audible cues are at least partially defined by the monitored person.
  • Figure 10 depicts a further example of a person monitor 1000 configured to monitor a person.
  • the person monitor 1000 comprises an electronic processor 1020 and at least one audio sensor 1012 in communication with the electronic processor 1020.
  • the person monitor 1000 further comprises an electronic storage device 1040 configured to store at least one audio signature 1042 and audio machine learning model 1044.
  • the electronic processor 1020 is configured to receive data corresponding to a sound sensed by the at least one audio sensor 1012, compare the data with the at least one audio signature 1042 using the audio machine learning model 1044, and issue an alert based at least partially on the comparison.
  • the audio machine learning model 1044 is at least partially trained using audio training data comprising a sound made by the monitored person.
  • the audio machine learning model 1044 can be at least partially pre-trained using existing audio training data.
  • the existing audio training data can be generic data (e.g. acquired across a very broad cohort of different people) or may be more specific training data relating to the cohort of the monitored person.
  • the training data may be derived from people who have statistically similar characteristics to the monitored person, such as their gender, age, health condition (if this affects their vocalisation), amongst other characteristics.
  • the audio machine learning model 1044 may be trained exclusively using data acquired by the person monitor 1000 over a training period.
  • the audio machine learning model 1044 can passively and continuously operate in the background to learn the monitored person's vocal characteristics, and may continually learn the monitored person's vocal characteristics during e.g. background conversations.
  • the machine learning model 1044 may be trained over a finite training period during which the at least one audio sensor 1012 records spoken audio by the monitored person.
  • the machine learning model 1044 can also be trained using audio data originating from the monitored person that is provided, for example, by associated carers via e.g. a network of smartphones.
  • the audio machine learning model 1044 can comprise a convolutional neural network.
  • the machine learning model 1044 can further comprise a pipeline of machine learning models such as an additional linear classifier that receives the output from a convolutional neural network.
  • the audio signature 1042 can be any number of audio cues which are used to issue alerts.
  • audio signature 1042 can comprise a number of pre-defined spoken words.
  • the audio signature 1042 can comprise at least one spoken word that is at least partially user defined.
  • the audio machine learning model 1044 may be stored on an associated cloud server, with person monitor 1000 sending sensed audio data to the cloud server via at least one transceiver.
  • the electronic processor of the cloud server may also or alternatively perform the comparison using the audio machine learning model.
  • an electronic storage device of the cloud server can store the audio signature. Data from the at least one audio sensor can then be relayed to the cloud server and an electronic processor of the cloud server can then perform the comparison between the sensed sound and audio signature using the audio machine learning model.
  • a system for monitoring a person can comprise a cloud server and a person monitor.
  • the person monitor can comprise at least one audio sensor and a transceiver configured to communicate with the cloud server.
  • the cloud server can comprise an electronic processor and an electronic storage device configured to store at least one audio signature.
  • the electronic processor of the cloud server can be configured to receive data corresponding to a sound sensed by the at least one audio sensor of the person monitor, compare the data with the at least one audio signature using an audio machine learning model, and issue an alert based partially on the comparison, wherein the audio machine learning model is at least partially trained using audio training data comprising a sound made by the monitored person.
  • the audio machine learning model can be stored on the electronic storage device of the cloud server.
  • the person monitors and associated cloud servers disclosed herein can be configured to identify trends and events in or relating to the monitored person at an early stage, thereby allowing for early intervention before an emergency response is required.
  • the monitored person is also able to call for immediate help if needed (either from emergency personnel or their trusted carers) by using hands-free phrases or audio cues that can be personalised or customised by the monitored person, without the need for cumbersome wearable devices.
  • the on-board sensors of the person monitors can be contained entirely within the housing of the person monitor. The use of multiple sensors distributed throughout a building, such as pressure sensors under mattresses or water sensors installed in plumbing, is not required. In some examples, the monitored person does not need to interact with the person monitor or understand its function, and so is not presented with any technological barriers.
  • Processed data and inferred events/trends can be made available to the carer group of the monitored person, allowing a carer to virtually check up on the monitored person to see if everything is as expected.
  • the data and events/trends made available to the carer group can be presented in a way that does not violate the privacy of the monitored person.
  • the monitored person can have control over the membership of their carer group and each carer's respective level of access to data, preserving the privacy of the monitored person and giving them agency over their care.
  • Carers of the monitored person can also automatically receive useful insights into the wellbeing of the monitored person on their personal devices, such as smartphones.
  • the person monitors disclosed herein can be configured for ultralow power usage and can be powered solely using a battery for years at a time.
  • the cloud server associated with the person monitor can be configured to handle all significant data processing to enable the person monitor to operate at very low power without sacrificing accuracy in the processing of sensor data.
  • the person monitors disclosed herein can include electronic processors of different capabilities, particularly depending on the power requirements of the person monitor.
  • the person monitors may be part of a system and may be networked to an associated cloud server, as discussed herein.
  • While the person monitors include electronic processors for e.g. handling sensor data, in many instances the processing capabilities of the person monitors may be limited, with the majority of the processing handled by the associated cloud server.
  • the person monitors and/or cloud servers disclosed herein can comprise electronic computing devices, and the methods disclosed herein can be computer-implemented methods implemented on those electronic computing devices.
  • While person monitors and cloud servers comprising electronic processors and electronic storage devices have been disclosed, it should be understood that the person monitors and/or cloud servers can comprise additional components common to electronic computing devices.
  • these can include memory (e.g. a volatile memory such as a RAM) for the loading of executable instructions, the executable instructions defining the functionality that the person monitor or cloud server carries out under control of its electronic processor.
  • There may also be a user interface for user control, which may comprise, for example, computing peripheral devices such as display monitors, computer keyboards and the like.
  • The cloud server may be a single computer or a single server, or its functionality may be performed by a server apparatus distributed across multiple server components connected via a communications network (such as a private LAN, a WAN, or the public internet).


Abstract

A person monitor with one or more sensors configured to monitor a person and transmit monitoring data to a remote server. The device may have very low power consumption and may communicate with and receive data from a remote server. The server may adjust a sensor activation state and/or sensor acquisition rate based at least partially on received sensor information. The monitor may determine the state of a monitored person based on the output of one or more sensors and/or healthcare devices and/or one or more previously determined states. The monitor may be configured to determine the location of a monitored person based on information from one or more sensors and at least one previously determined location. The monitor may include an audio sensor and be configured to issue an alert based at least partially on a comparison between sound data from the audio sensor and user-defined audio criteria.

Description

PERSON MONITOR
FIELD
This invention relates to person monitors and systems including person monitors.
BACKGROUND
Person monitors can be used to monitor a person. They may include multiple sensors distributed throughout the monitored person's building.
SUMMARY
According to one example, there is provided a person monitor configured to monitor a person, the person monitor comprising: at least one audio sensor, an electronic processor in communication with the at least one audio sensor, and an electronic storage device configured to store at least one audio signature; wherein the electronic processor is configured to: receive data corresponding to a sound sensed by the at least one audio sensor, compare, using an audio machine learning model, the data with the at least one audio signature, and issue an alert based at least partially on the comparison; wherein the audio machine learning model is at least partially trained using audio training data comprising a sound made by the person.
According to another example, there is provided a person monitor configured to monitor a person, the person monitor comprising: at least one audio sensor, an electronic processor in communication with the at least one audio sensor, and an electronic storage device configured to store at least one audio criteria; wherein the electronic processor is configured to: receive data corresponding to a sound sensed by the at least one audio sensor, compare the data with the at least one audio criteria, and issue an alert based at least partially on the comparison; wherein the at least one audio criteria is at least partially user-defined by the person.
In another example, there is provided a system for monitoring a person, the system comprising: a cloud server, and a person monitor in communication with the cloud server, the person monitor comprising: at least one sensor, an electronic processor in communication with the at least one sensor, a transceiver in communication with: the electronic processor, and the cloud server; and a battery; wherein: power consumed by the person monitor is provided by the battery of the person monitor, and the power consumed by the person monitor is no greater than 1 mW.
In another example there is provided a system for monitoring a person, the system comprising: a cloud server, and a person monitor configured to monitor a person, the person monitor comprising: at least one sensor, a transceiver in communication with the cloud server, an electronic storage device storing sensor configuration information, the sensor configuration information comprising: a sensor activation state, and/or a sensor acquisition rate; and an electronic processor in electronic communication with the at least one sensor, the transceiver, and the electronic storage device; wherein: the at least one sensor: is configured to acquire data at a rate based at least partially on the sensor acquisition rate, and/or is activated or deactivated based at least partially on the sensor activation state; and the electronic processor is configured to: receive, from the cloud server via the transceiver, updated sensor configuration information, update the sensor configuration information stored within the electronic storage device with the updated sensor configuration information, and adjust a sensor activation state and/or sensor acquisition rate based at least partially on the updated sensor configuration information.
In another example, there is provided a person monitor configured to monitor a person, the person monitor comprising: at least one sensor, an electronic processor in communication with the at least one sensor, and an electronic storage device configured to store at least one previously determined state and associated time of determination; wherein the electronic processor is configured to: determine a state of the monitored person, and associate the determined state with a time of determination; wherein the determined state of the monitored person is based at least partially on: an output of the at least one sensor, and at least one previously determined state and associated time of determination stored within the electronic storage device.
In another example, there is provided a person monitor configured to monitor a person, the person monitor comprising: at least one sensor, an electronic processor in communication with the at least one sensor, a transceiver in communication with the electronic processor, an electronic storage device in communication with the electronic processor, and a battery; wherein: power consumed by the person monitor is provided by the battery of the person monitor, and the power consumed by the person monitor is no greater than 1 mW.
In another example, there is provided a person monitor configured to monitor a person, the person monitor comprising: at least one sensor, an electronic processor in communication with the at least one sensor, and an electronic storage device in communication with the electronic processor, the electronic storage device configured to store a previously determined location and associated time of determination; wherein the electronic processor is configured to determine a location of the monitored person based at least partially on: an output from the at least one sensor, and at least one previously determined location and associated time of determination stored within the electronic storage device.
In another example, there is provided a person monitor configured to monitor a person, the person monitor comprising: at least one sensor, an electronic processor in communication with the at least one sensor, and a receiver configured to receive data from at least one healthcare device; wherein the electronic processor is configured to determine a state of the monitored person based at least partially on an output of the at least one sensor and the data received from the at least one healthcare device.
In another example, there is provided a person monitor configured to monitor at least one person within a building comprising at least one room, the person monitor comprising: at least one CO2 sensor, at least one H2O sensor, and an electronic processor in communication with the at least one CO2 sensor and at least one H2O sensor; wherein: the processor is configured to determine a number of people within a room of the building, the determination based at least partially on: an output from the at least one CO2 sensor, and an output from the at least one H2O sensor.

In another example, there is provided a person monitor configured to monitor at least one person within a building comprising a plurality of rooms, the person monitor comprising: at least one CO2 sensor, at least one H2O sensor, and an electronic processor in communication with the at least one CO2 sensor and at least one H2O sensor; wherein: the processor is configured to detect a movement event corresponding to the at least one person moving from a first room to a second room, the detection based at least partially on: an output from the at least one CO2 sensor, and an output from the at least one H2O sensor.
Further examples can be implemented according to any one of the dependent claims.
It is acknowledged that the terms "comprise", "comprises" and "comprising" may, under varying jurisdictions, be attributed with either an exclusive or an inclusive meaning. For the purpose of this specification, and unless otherwise noted, these terms are intended to have an inclusive meaning - i.e., they will be taken to mean an inclusion of the listed components which the use directly references, and possibly also of other non-specified components or elements.
Reference to any document in this specification does not constitute an admission that it is prior art, validly combinable with other documents or that it forms part of the common general knowledge.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings which are incorporated in and constitute part of the specification, illustrate examples of the invention and, together with the general description of the invention given above, and the detailed description of examples given below, serve to explain the principles of the invention, in which:
Figure 1 illustrates an example person monitor.
Figure 2 depicts an example network including a person monitor.
Figure 3 illustrates an example system including a pattern machine learning model.
Figure 4 depicts an example method of detecting a trend in a monitored person.
Figure 5 depicts an example system comprising a person monitor and a cloud server.
Figure 6 shows an example person monitor configured to determine a state of a monitored person.
Figure 7 depicts an example person monitor configured to determine a location of a monitored person.
Figure 8 depicts a person monitor configured to monitor at least one person within a building comprising at least one room.
Figure 9 depicts an example of a person monitor configured to monitor a person.
Figure 10 depicts a further example of a person monitor configured to monitor a person.

DETAILED DESCRIPTION
Person monitors are used to monitor the status and wellbeing of people, particularly people who may be vulnerable such as elderly people living alone, disabled people, or people receiving post-operative care.
Figure 1 illustrates a person monitor 100 according to one example. The person monitor 100 can comprise at least one sensor 110 and an electronic processor 120 in communication with the at least one sensor 110. The electronic processor 120 can be used to process data generated by the at least one sensor 110 and/or execute instructions received from a server, such as a cloud server, as described in more detail herein. The at least one sensor 110 can be an array of sensors. In most examples, the at least one sensor 110 is/are localised within a housing of the person monitor 100 rather than being distributed throughout a building. The modality and/or processing of data from the at least one sensor 110 can be chosen to preserve the privacy of the monitored person. For example, the at least one sensor 110 can intentionally omit a camera to prevent prying.
The person monitor 100 can further comprise a transceiver 130 in communication with the electronic processor 120. The transceiver 130 can be used for bidirectional communications, e.g. with a gateway and/or cloud server, as described herein. In other examples, transceiver 130 may be replaced by a dedicated transmitter and/or receiver, depending on the requirements of person monitor 100. The person monitor 100 can further comprise an electronic storage device 140 in communication with electronic processor 120. Electronic storage device 140 can be used to store configuration information and other information in electronic memory.
A power supply 150 supplies power to the person monitor 100 and its individual components. In some examples, the power supply 150 can be a battery that supplies all power consumed by the person monitor 100. In other examples, the power supply 150 can include a wired power connection or may include a back-up battery alongside a wired power connection.
In some examples the person monitor power supply 150 can comprise a very low power, very high efficiency switching regulator with an adjustable regulated voltage output. The output voltage can be dynamically adjusted by electronic processor 120 to reduce power consumption during idle times or when those of the at least one sensor 110 with low-voltage power supply capability are operational. In other examples electronic processor 120 may also control power to those of the at least one sensor 110 that have integrated shutdown capability to reduce consumption to very low levels, or control external sensor power supply switches to eliminate power consumption altogether when the at least one sensor 110 is not in use.
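The idle-time voltage reduction and sensor power gating described above can be sketched as follows. This is an illustrative model only: the rail voltages and the sensor attribute names (`low_voltage_ok`, `in_use`, `enabled`) are assumptions for the sketch, not taken from the specification.

```python
# Hypothetical sketch of the dynamic power-management scheme: the
# processor shuts down unused sensors and drops the regulator output
# when every remaining sensor tolerates the low-voltage rail.

IDLE_VOLTAGE = 1.8    # assumed low-power rail (V)
ACTIVE_VOLTAGE = 3.3  # assumed full-power rail (V)

class PowerManager:
    def __init__(self, sensors):
        # sensors: name -> {'low_voltage_ok', 'in_use', 'enabled'}
        self.sensors = sensors
        self.rail_voltage = ACTIVE_VOLTAGE

    def enter_idle(self):
        """Gate power to unused sensors, then lower the rail if all
        still-enabled sensors support the low-voltage supply."""
        for s in self.sensors.values():
            if not s["in_use"]:
                s["enabled"] = False  # integrated shutdown or external switch
        active = [s for s in self.sensors.values() if s["enabled"]]
        if all(s["low_voltage_ok"] for s in active):
            self.rail_voltage = IDLE_VOLTAGE

    def wake(self):
        """Restore the full-power rail and re-enable in-use sensors."""
        self.rail_voltage = ACTIVE_VOLTAGE
        for s in self.sensors.values():
            s["enabled"] = s["in_use"]
```

In a real device the rail adjustment would be a register write to the switching regulator rather than a field assignment, but the control flow is the same.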
Person monitor 100 can further comprise a human-machine interface (HMI) 160. The exact configuration of the HMI 160 can depend on the application of person monitor 100. In many examples, the person monitored by person monitor 100 does not need to interact with or understand the function of the person monitor 100, and the person monitor 100 can passively operate in the background without input from the monitored person. In these examples, HMI 160 can be a basic interface including, for example, one or more LEDs used to indicate a status (e.g. connection status, power status, etc.) of person monitor 100. In other examples, HMI 160 can be more sophisticated and may include one or more interfaces (e.g. buttons) that allow the monitored person or other personnel to actively interact with person monitor 100.
Person monitor 100 is typically installed in a building or facility where the monitored person resides, and is typically fastened to a wall or ceiling of a room. For example, if the person to be monitored is an elderly person living independently, then the person monitor 100 can be installed on a wall of a room of the house of the elderly person. In a further example, the person to be monitored may be a person recovering from surgery or in post-operative care in a healthcare facility, and the person monitor 100 may be installed on a wall of a room of the healthcare facility. Two or more person monitors 100 may be installed in the relevant location depending on e.g. the size of the house and/or the number of rooms. In some examples, person monitor 100 can be configured for simple and toolless installation (e.g. using an adhesive backing without requiring screws), particularly in examples where power supply 150 is a battery.
Example sensor hardware
In some examples, the at least one sensor 110 can comprise an audio sensor, such as a microphone. The audio sensor can be configured to measure ambient sound levels (such as power, pressure, or intensity levels). Measuring ambient sound levels can help preserve the privacy of the monitored person, as the sound levels can be independent of the actual content of the sound.
Subsequent processing of the raw audio sensor data (either by electronic processor 120 of the person monitor 100 and/or by offsite processing at e.g. a cloud server, as described in more detail herein) can provide insights into events, lack of events, periodicity and timing of such events and their sound level attributes within the local audio environment. These insights can be used to help determine emerging trends in the monitored person, detect deviations in established patterns, or to trigger alerts, as described in more detail herein.
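One privacy-preserving reduction of raw microphone samples to an ambient sound level is a per-frame RMS level in decibels: only the scalar level is retained, never the audio content. A minimal sketch (the frame size and reference level are assumptions):

```python
import math

def sound_pressure_level(frame, ref=1.0):
    """Reduce one frame of audio samples to a single dB level.

    Only this scalar would be stored or transmitted; the raw
    samples, and hence any speech content, are discarded.
    """
    rms = math.sqrt(sum(x * x for x in frame) / len(frame))
    if rms == 0:
        return float("-inf")  # silence
    return 20 * math.log10(rms / ref)
```

A time series of such levels is sufficient to detect events, gaps, and periodicity in the local audio environment without recording anything intelligible.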
The at least one sensor 110 can additionally or alternatively comprise a pressure sensor configured to measure the absolute pressure of the ambient environment. Atmospheric pressure data can be used by person monitor 100 (or associated backend/cloud services) to increase the accuracy of other measurements and/or to provide additional functionality in combination with other sensors via sensor fusion. Ambient pressure measurements acquired by the pressure sensor can also be compared with external atmospheric pressure measurements.
The at least one sensor 110 can additionally or alternatively comprise a temperature sensor used to measure and report ambient temperature.
The at least one sensor 110 can additionally or alternatively comprise a humidity sensor (e.g. an H2O sensor). The humidity sensor can be configured to measure local humidity within the vicinity of person monitor 100. Data from the humidity sensor can be combined or fused with other sensor data (e.g. temperature data) to infer overall personal comfort levels of the monitored person. Local humidity and temperature data from the at least one sensor 110 can also be compared with external humidity and temperature data, as described in more detail herein.
The at least one sensor 110 can additionally or alternatively comprise an ambient light sensor. The ambient light sensor can be configured to sense ambient visible light and/or invisible light, such as infrared radiation. The ambient light sensor can sense and output a measured light level which can be used to assist in location monitoring and/or presence detection (either in isolation or in combination with other sensor data via sensor fusion), and can also be used as an input to higher-level algorithms via sensor fusion.
The at least one sensor 110 can additionally or alternatively comprise a CO2 sensor configured to measure ambient CO2 levels within the proximity of person monitor 100. Measurements of ambient CO2 levels about person monitor 100 can be used for location monitoring and/or presence detection, determination of the number of personnel within a room, and to infer movement events or movement of personnel, as described in more detail herein.
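As one illustration of how CO2 measurements could feed occupancy determination, the steady-state excess of room CO2 over the outdoor background scales roughly with the number of occupants under a simple mass-balance model, N × G = Q × (C_room − C_outdoor). The sketch below is not the specification's method; the per-person generation rate, ventilation rate, and background level are assumed illustrative values:

```python
def estimate_occupancy(co2_ppm, outdoor_ppm=420.0,
                       ventilation_m3_per_s=0.05,
                       co2_gen_m3_per_s_per_person=5.2e-6):
    """Steady-state mass-balance estimate of room occupancy.

    N = Q * (C_room - C_outdoor) / G, with concentrations converted
    from ppm to volume fractions. Defaults assume ~0.31 L/min CO2
    generation per seated adult and 50 L/s ventilation (illustrative).
    """
    excess = max(co2_ppm - outdoor_ppm, 0.0) * 1e-6  # ppm -> fraction
    n = ventilation_m3_per_s * excess / co2_gen_m3_per_s_per_person
    return round(n)
```

In practice such an estimate would be fused with H2O (humidity) data and other sensor outputs, as described for Figure 8, rather than relied on alone.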
The at least one sensor 110 can additionally or alternatively comprise a volatile organic compound (VOC) sensor. There are a variety of VOCs that can affect personal comfort and overall air quality. Measurement of VOC levels can be an effective way to detect changes to air quality and detect detrimental environmental changes overall. Analysis of the trends in certain VOCs can further be used to recognise or detect the onset of certain health conditions or health changes in the monitored person. For example, trends in certain VOC markers can indicate the onset of certain types of cancer. Measurement and analysis of VOCs within the environment can also be used to determine or infer a state or activity of the monitored person. For example, VOC levels and/or changes to measured VOC levels can indicate that the monitored person is cooking, cleaning, or moving through the building.
In some examples, the VOC sensor can be configured to output an indication of VOC levels without particular selectivity between VOCs. In other examples, the VOC sensor can be configured to discriminate between VOCs and to measure different levels of different VOCs.
The at least one sensor 110 can additionally or alternatively comprise a particulate matter sensor, such as a PM2.5 sensor. The presence and level of particles of different sizes in the ambient air can affect air quality and personal health, particularly in the cases where the monitored person may have a respiratory ailment. PM content can be measured across a range of particle sizes.
It will be understood that the at least one sensor 110 can include any one or more of the above example sensors in any combination, in addition to other sensors such as NO2 sensors, oxygen sensors, airflow sensors, vibration sensors, and chemical sensors amongst others.
Networked person monitors
The example person monitors disclosed herein are typically connected to one or more devices in a network of devices. Figure 2 illustrates an example network 200 comprising at-home healthcare device 205, person monitor 210, gateway 220, cloud server 230, carer group 240, emergency services 250, and external database 260.
In this example network 200, person monitor 210 is configured to bi-directionally communicate with gateway 220 via a transceiver. Person monitor 210 is typically configured for wireless communication with gateway 220 and/or cloud server 230, although it may be configured for wired communication in other examples. The transceiver of person monitor 210 can be configured to communicate using a number of different possible modalities and protocols.
Gateway 220 can be similarly configured for bi-directional communication with cloud server 230 and can also comprise a transceiver. These communications can be wireless or through one or more wired connections. In some examples, gateway 220 can include processing capabilities (through the use of e.g. physical or virtual electronic processors) or other functionalities. In other examples, gateway 220 can be configured only to relay communications. In some examples, gateway 220 can act as a gateway for multiple person monitors 210, 212, and 214.
Cloud server 230 can comprise virtual or physical processors that can be used to process data originating from person monitor 210 via gateway 220. In other examples, person monitor 210 can directly forward data originating from its associated sensor(s) to a processor of cloud server 230 for processing, without an intervening gateway. Cloud server 230 can also process data that has been at least partially processed by the electronic processor of person monitor 210 and/or gateway 220. This can reduce the requirements of the electronic processor on board person monitor 210. Cloud server 230 can also comprise electronic storage (e.g. electronic memory) for storing data received and/or processed by cloud server 230.
Cloud server 230 may also be in bi-directional communication with additional gateways, e.g. gateway 222, as part of a wider network servicing multiple person monitors. These additional gateways may be associated with additional person monitors (not depicted).
The example network 200 includes at-home healthcare device 205. Person monitor 210 can be configured to communicate with at-home healthcare device 205 to receive data therefrom and to relay this data to cloud server 230. Data from the at-home healthcare device 205 can be used by the person monitor 210 or associated cloud server 230 to determine a state of the monitored person, as described herein. The healthcare device 205 may be, for example, a blood pressure monitor or blood glucose monitor. Healthcare device 205 may communicate with person monitor 210 using, for example, Bluetooth Low Energy, although other communication protocols can be used in other examples.
In example network 200, carer group 240 can communicate with cloud server 230 and person monitor 210. Carer group 240 can comprise a group of carers for the monitored person. For example, if the monitored person is an elderly person living independently, then carer group 240 may comprise family members of the monitored person. In another example, if the monitored person is a patient who is recovering or receiving post-operative care, then carer group 240 may comprise doctors or other healthcare professionals. In some examples, individuals within carer group 240 may have different levels of authorisation or access to data, as described in more detail herein. Similarly, the membership of carer group 240 and their associated level of authorisation may be dictated by the monitored person.
The carer group 240 may be able to communicate directly with person monitor 210 or may send or receive information through gateway 220 and/or cloud server 230, depending on the configuration of network 200 and its constituent components. For example, cloud server 230 can be configured to issue alerts to carer group 240 based on data received from person monitor 210. These alerts can relate to health insights regarding the monitored person, changes or trends in their behaviour or condition, and/or can relate to acute emergencies or situations requiring intervention or welfare checks. In some configurations, carer group 240 can also voluntarily access data stored or processed by cloud server 230 for review, without the issuance of an alert. For example, carer group 240 may have access to data stored on cloud server 230 so they can analyse trends or patterns as recorded by person monitor 210 of their own volition.
Carer group 240 can communicate with person monitor 210, gateway 220, and/or cloud server 230 through e.g. personal devices such as personal computers, cell phones or smartphones, tablets, and other electronic devices. In other example networks, carer group 240 may only communicate with cloud server 230, and may not be able to directly communicate with person monitor 210. For example, person monitor 210 may only produce raw data or data that is not easily interpreted without processing or formatting which may take place at cloud server 230.
In the example network 200 shown in Figure 2, cloud server 230 is in communication with emergency services 250. For example, if the electronic processor of person monitor 210 (or real/virtual processor of gateway 220 and/or cloud server 230) determines that an acute emergency is occurring, the person monitor 210 (or gateway 220 or cloud server 230) may issue an alert directly to emergency services 250 to request, for example, an ambulance. In other networks, person monitor 210, gateway 220, cloud server 230, and/or carer group 240 can be in communication with emergency services 250. Emergency services 250 can also comprise health service providers (e.g. doctors) in some examples.
Example network 200 further includes an external database 260 in communication with cloud server 230. External database 260 can comprise external information such as external weather information (e.g. pressure, temperature, humidity data, precipitation, and/or wind) not directly measured by person monitor 210. Other external information can include e.g. television schedules, calendar information relating to external events, and other forms of information that may not directly derive from person monitor 210, but may be used to detect trends or events relating to the monitored person, as described in more detail herein.
It should be understood that network 200 is only an example, and other networked person monitors and systems can be configured differently. In other examples of networked person monitors, the network may not include a gateway, and the person monitor may directly communicate with a cloud server. The person monitor can also be configured for communication with both the gateway and the cloud server.
Several different communication protocols can be used in the example network 200. For example, person monitor 210 can be configured to communicate with gateway 220 via RF communications, e.g. over a low-power wide-area network (LPWAN). Gateway 220 can also communicate with cloud server 230 over a WAN or LPWAN in some examples, also using e.g. RF communications. Other known communication protocols or architectures can be used in other examples of networked person monitors.
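LPWAN uplinks typically carry only a few tens of bytes per message, so sensor readings are commonly packed into a compact fixed binary payload before transmission. The field layout, scaling factors, and offsets below are hypothetical choices for illustration, not a format defined by the specification:

```python
import struct

# Hypothetical 9-byte uplink payload (big-endian):
#   uint16 temperature  (0.01 degC steps, offset -40 degC)
#   uint16 humidity     (0.01 %RH steps)
#   uint16 co2          (ppm)
#   uint16 sound_level  (0.1 dB steps)
#   uint8  flags        (bit 0: presence detected)
PAYLOAD_FMT = ">HHHHB"

def encode(temp_c, rh, co2_ppm, spl_db, presence):
    """Pack one set of sensor readings into the compact payload."""
    return struct.pack(PAYLOAD_FMT,
                       round((temp_c + 40) * 100),
                       round(rh * 100),
                       round(co2_ppm),
                       round(spl_db * 10),
                       1 if presence else 0)

def decode(payload):
    """Recover engineering-unit readings from a received payload."""
    t, h, c, s, f = struct.unpack(PAYLOAD_FMT, payload)
    return {"temp_c": t / 100 - 40, "rh": h / 100,
            "co2_ppm": c, "spl_db": s / 10,
            "presence": bool(f & 1)}
```

Keeping the payload under ~10 bytes leaves headroom within typical LPWAN duty-cycle and payload limits while still carrying the readings described above.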
Pattern learning
The person monitors disclosed herein are configured to determine the state of the monitored person and to detect long-term trends in their health or wellness, detect medium- to short-term events that may benefit from or require intervention, and to detect acute emergencies or events that may require immediate assistance. In some cases, these trends and/or events can be detected by deviations in learned patterns of the monitored person, or from other cues independent of learned patterns. In other cases, trends and/or events can be detected with reference to established patterns that are independent of the learned patterns of the monitored person. As described in more detail herein, the detection of deviations from established patterns or other cues can be made using local processing of sensor data directly at the person monitor, and/or using offsite processing at a networked cloud server or gateway.
The disclosed person monitors and/or associated cloud servers can be configured to learn one or more patterns of the monitored person using one or more pattern machine learning models operating on an electronic storage device of the person monitor and/or associated cloud server. Data which is acquired from the at least one sensor of the person monitor can be processed to infer the monitored person's states, actions, activities, and other characteristics, as described in more detail herein. These inferred states and other variables can then be used as inputs to the pattern machine learning model to construct patterns of the monitored person.
Figure 3 depicts an example system for learning a pattern of a monitored person. Person monitor 310 is installed in a residence of the monitored person and communicates with cloud server 320, potentially via a gateway (not depicted). Cloud server 320 hosts pattern machine learning model 330.
In some examples, the pattern machine learning model 330 can be at least partially pre-trained based on existing data 340 stored within cloud server 320. In other examples, existing data 340 may be stored in an external database. The existing data 340 can include statistical averages or statistical patterns for people that have the same or similar characteristics as the monitored person, such as age, gender, socioeconomic group/status, area of living, lifestyle, etc. The existing data 340 may describe, for example, one or more patterns comprising waking/sleeping times, eating/hygiene cycles, activity schedules, and other daily activities that represent statistical averages for the monitored person's cohort. The pattern machine learning model 330 which is pre-trained using existing data 340 can then be adjusted and refined over an adjustment period using raw and/or processed sensor data from the person monitor 310 to customise the pattern machine learning model 330 for the monitored person's patterns. For example, data acquired by the sensors of the person monitor 310 can be processed to determine activities and states of the monitored person. These can be correlated with time and used to learn the monitored person's pattern.
In other examples, the person monitor 310 may not be provided with existing data 340 or a pre-trained machine learning model, and the pattern machine learning model 330 may be trained exclusively using data acquired by the person monitor 310 over a training period.
Additionally, the pattern machine learning model 330 used by the person monitor 310 and associated cloud server 320 or backend systems can continuously learn and adjust to changes in the monitored person's patterns. For example, the pattern machine learning model 330 can be a recurrent neural network, such as a long short-term memory network. This can allow the pattern machine learning model 330 to update its learned patterns and to discern between deviations from established patterns and gradual changes to established patterns. In other examples, pattern machine learning model 330 can implement other kinds of recurrent neural networks or machine learning architectures.
In some examples, the patterns of the monitored person as learned by the pattern machine learning model 330 can relate to activities, such as cooking, eating, watching television, showering, toileting, etc. The patterns can further relate to states, such as awake, asleep, at rest, active, etc. Patterns including these activities and/or states can be expressed in terms of a schedule with expected start times and/or end times.
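A learned schedule of this kind could be represented, in a deliberately simplified form, as a set of activities with an expected start time and a learned tolerance, against which an observed start time is scored. This sketch is a stand-in for the trained model described above, not its actual internals; the activity names and values are assumptions:

```python
# Simplified stand-in for a learned daily pattern: each activity has
# a mean start time (minutes after midnight) and a standard deviation
# learned from past observations.
pattern = {
    "wake":  {"mean": 7 * 60,       "std": 20.0},
    "lunch": {"mean": 12 * 60 + 30, "std": 25.0},
    "sleep": {"mean": 22 * 60,      "std": 30.0},
}

def deviation_score(activity, observed_minutes, pattern):
    """Z-score of an observed start time against the learned pattern;
    a large score would flag the observation as a deviation."""
    p = pattern[activity]
    return abs(observed_minutes - p["mean"]) / p["std"]
```

Weekly or seasonal patterns could be handled the same way with a longer key (e.g. weekday plus activity), while the recurrent model described above would additionally capture ordering between activities.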
Furthermore, the pattern machine learning model 330 can learn multiple patterns that occur over different time scales. For example, while the monitored person will often have a pattern over a 24-hour cycle, they may also have patterns that occur over larger timescales, such as weekly or monthly timescales, and the pattern machine learning model 330 can be configured to learn these patterns. For instance, the monitored person may leave their accommodation every Sunday morning to attend church, and the pattern machine learning model 330 may establish a weekly pattern including their absence from the house every Sunday morning. Patterns over still larger timescales can also be detected and learned, such as patterns corresponding with seasons of the year (for example, spending more time outside during summer months). For these larger timescale patterns, some amount of pre-training of pattern machine learning model 330 can be useful if the amount of time needed to build up sufficient training data is significant.
Similarly, the monitored person can behave in patterns that occur over shorter timescales, such as hours, and the pattern machine learning model 330 can be configured to learn these patterns. For instance, certain monitored persons may have very well-defined routines such as hygiene cycles. These routines may be expressed as patterns learned by the pattern machine learning model 330 or as sub-patterns that comprise part of a larger pattern learned by the pattern machine learning model 330.
Example health or wellbeing patterns can include sleep patterns, morning vigour patterns, absence patterns (e.g. away-ness from home), social engagement patterns, and food preparation patterns amongst others.
Once the pattern machine learning model 330 has learned one or more patterns of the monitored person, the person monitor 310 monitors the person to detect deviations or variations from the learned patterns. Data sensed by the one or more sensors of person monitor 310 can be processed at the person monitor 310 and/or cloud server 320 (and/or any intervening gateway) to determine the person's state or activity. This state and/or activity can then be used as an input for the pattern machine learning model 330. Detected deviations or variations from learned patterns can be used to indicate long-term trends in the monitored person, medium-term trends that require or may benefit from intervention by carers, or acute episodes requiring an immediate response (e.g. by emergency personnel).
It should be understood that although Figure 3 depicts pattern machine learning model 330 hosted on cloud server 320, this may not be the case in other examples. For example, pattern machine learning model 330 may be hosted on person monitor 310 in other examples, without the need for person monitor 310 to communicate with cloud server 320 to learn patterns. In still further examples, pattern machine learning model 330 may be hosted on a gateway in the network, or distributed across multiple devices in a network.
Detecting deviations from established patterns
Figure 4 depicts an example method of detecting trends in a monitored person. Data is acquired from at least one sensor of a person monitor at 410. The data may optionally be processed at 420, either using a processor of the person monitor or an offsite processor (e.g. on the associated cloud server). The sensor data acquired at 410 and/or processed at 420 is compared with the pattern as learned by the pattern machine learning model at 430 to determine if the sensor data (and/or processed sensor data) represents a deviation from the learned pattern of the monitored person. If a deviation is not detected, then the learned pattern may be reinforced at 450 or no action may be taken. If the sensor data indicates a gradual change in the learned pattern, the pattern may be updated at 460. If a deviation is detected, then the sensor data may be analysed to detect trends or events at 470. Additionally or alternatively, if an emergency or event is detected then an alert may be issued at 480.
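The branching in the Figure 4 method can be sketched as a simple dispatcher. The step numbers in the comments refer to Figure 4; the deviation measure and the thresholds are illustrative assumptions rather than values from the specification:

```python
def process_reading(score, emergency_cue):
    """Route one processed observation per the Figure 4 flow.

    score: deviation of the observation from the learned pattern
    (e.g. a z-score). emergency_cue: True if raw-data emergency
    criteria (such as an audible call for help) are met.
    """
    if emergency_cue:
        return "issue_alert"        # 480: acute emergency, no pattern needed
    if score < 1.0:
        return "reinforce_pattern"  # 450: no deviation detected
    if score < 3.0:
        return "update_pattern"     # 460: gradual change in the pattern
    return "analyse_trends"         # 470: deviation; analyse for trends/events
```

Distinguishing a gradual change (460) from a deviation (470) would in practice depend on the history of scores rather than a single threshold, which is one role of the recurrent pattern model described earlier.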
For example, sensor data acquired at 410 and processed at 420 may suggest that the monitored person is going to bed at the usual time, but is falling asleep later and later in the night. This data can be compared against the person's usual sleep patterns as learned by the pattern machine learning model at 430 and a trend identified at 470. A carer group can then be informed that the monitored person is suffering from insomnia based on the trend identified at 470. In other examples, the detected trend of insomnia may be analysed at 470 in combination with other identified trends in order to identify trends or health conditions that are responsible for the insomnia. For example, the insomnia detected from sensor data may be fused with other identified trends to determine that the monitored person is suffering from the onset of dementia.
In other examples, the acquired/processed sensor data may suggest that the monitored person is restless in the night during specific durations or stages of their sleep, or may be restless during times at which they typically tend to be asleep. These trends may be analysed at 470 to determine that the monitored person is suffering from the onset of respiratory unwellness.
The sensor data that is acquired at 410 and/or processed at 420 can additionally be compared to one or more reference patterns at 440. The one or more reference patterns can be used to detect emerging patterns that are indicative of health trends or episodes requiring intervention, but without reference to the monitored person's patterns of living as learned by the pattern machine learning model. Further analysis to detect trends or events can take place at 470 depending on the comparison, or an alert can be issued at 480. If the comparison at 440 does not indicate anything out of the ordinary, the reference pattern referenced at 440 does not necessarily need to be changed or updated.
For example, the sensor data acquired at 410 and/or processed at 420 may suggest that the monitored person is toileting more and more frequently. These behaviours can be compared to the learned pattern at 430 to detect deviations therefrom, but can also be compared to a reference pattern at 440 which represents general patterns that people exhibit when suffering from certain health conditions, such as problems with kidney function, voiding proportion, or other urological conditions that could benefit from early treatment or intervention. If this analysis suggests that the behaviour of the monitored person coincides with a pattern of reduced kidney function or urological pathologies - irrespective of the person's usual habits represented by their learned pattern - then the person monitor and/or cloud server may issue an alert at 480 or analyse for trends at 470. Although the reference pattern used at 440 is not personalised for the monitored person, it may be statistically weighted based on the monitored person's characteristics (e.g. age, sex, etc.).
The example method depicted in Figure 4 shows the possibility of issuing an alert after comparing sensor data to a learned pattern or reference pattern. In some examples, the person monitor and/or cloud backend may be configured to issue an alert based on raw or processed sensor data (e.g. data from 410 or 420), without reference to any particular pattern, if emergency cues are met. Example cues include audible calls for help or pre-defined audio criteria as detected by an audio sensor of the person monitor, as further described herein.
Ultra-low power person monitors
In some examples, the processing of sensor data (as at 420 with respect to Figure 4) originating from the person monitor can be substantially or entirely handled offsite, such as at an associated cloud server. Similarly, analysis of raw or processed sensor data to e.g. determine variations from established patterns (as at 430 or 440 with respect to Figure 4) can also be substantially or entirely handled offsite. This can drastically reduce the requirements for the local electronic processor on board the person monitor, which can allow for person monitors with very low power consumption. In some examples, and with reference to Figure 1, a person monitor 100 can be configured to monitor a person, and can comprise at least one sensor 110, an electronic processor 120 in communication with the at least one sensor 110, a transceiver 130 in communication with the electronic processor 120, and an electronic storage device 140 in communication with the electronic processor 120. The power supply 150 can be a battery, and the person monitor 100 can be configured so that power consumed by the person monitor 100 is provided by the battery of the person monitor 100. In some examples, the power consumed by the person monitor 100 can be no greater than 1 mW. In further examples, the power consumed by the person monitor may be no greater than 1 mW over a 1 second burst. In further examples, the power consumed by the person monitor may be no greater than 1 mW over four seconds per hour.
In some examples, the battery can supply power to the person monitor 100 for up to 6 to 10 years depending on how person monitor 100 is configured.
The person monitor 100 can be a component of a system including a cloud server. The cloud server can be in communication with the person monitor via the transceiver of the person monitor, as described herein.
In some examples, the person monitor 100 can be configured to have an idle power consumption no greater than 50 µW.
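As a sanity check, the figures above can be combined in a back-of-envelope battery-life calculation. The battery (a 1200 mAh, 3.6 V lithium cell) is a hypothetical example not taken from the specification, and real lifetimes also depend on self-discharge and temperature, which are ignored here; the idle draw is read as 50 microwatts.

```python
# Duty-cycle figures stated in the text.
ACTIVE_POWER_W = 1e-3        # 1 mW burst ceiling
ACTIVE_SECONDS_PER_HOUR = 4  # four seconds of activity per hour
IDLE_POWER_W = 50e-6         # idle consumption

# Time-weighted average power over one hour.
average_power_w = (ACTIVE_POWER_W * ACTIVE_SECONDS_PER_HOUR / 3600
                   + IDLE_POWER_W * (3600 - ACTIVE_SECONDS_PER_HOUR) / 3600)

# Hypothetical cell: 1.2 Ah x 3.6 V, converted to joules.
battery_energy_j = 1.2 * 3.6 * 3600

lifetime_years = battery_energy_j / average_power_w / (365.25 * 24 * 3600)
# lifetime_years comes out to roughly 9-10 years, consistent with the
# 6 to 10 year range stated in the text.
```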
In order to ensure that the power consumption of the person monitor 100 does not exceed 1 mW over the specified time range, the hardware of the person monitor must be chosen and operated appropriately. For example, the electronic processor 120 can be a low-power processor such as a 32-bit ARM® Cortex®-M4F.
The transceiver 130 of the person monitor 100 can be configured to communicate with the cloud server using LPWAN. LPWAN can be particularly suitable for low-power examples of person monitor 100 as it combines sufficient data transfer rates with good range and micropower requirements. For example, the transceiver 130 can consume no more than 50 mA of current for 5 ms during transmission or reception. Other low-power communication protocols can also be used.
Communications can also be scheduled with the associated cloud server in order to not exceed maximum power requirements for person monitor 100. For example, the person monitor 100 can be scheduled to communicate to provide delivery of meaningful information while still operating within its low-power requirements.
In some examples, the electronic storage device 140 of the person monitor 100 can comprise sensor configuration information comprising a sensor activation state and/or a sensor acquisition rate. The sensor activation state and acquisition rate within the sensor configuration information can dictate whether a given sensor 110 of the person monitor 100 is activated and its associated rate of data acquisition. The associated cloud server can be configured to dynamically adjust or update the sensor configuration information, as described herein. This can be used to keep power consumption by person monitor 100 below the required threshold. Such a configuration can also allow for the person monitor to use standard sensors that can have high sensitivity and high resolution, or may not be configured for ultra-low power consumption, without exceeding the overall power consumption requirements of person monitor 100.
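The sensor configuration information and a cloud-pushed update of it could be sketched as below. The field names, payload shape, and values are illustrative assumptions, not part of the specification.

```python
from dataclasses import dataclass

@dataclass
class SensorConfig:
    active: bool                # sensor activation state
    acquisition_rate_hz: float  # sensor acquisition rate

# Configuration as held in the monitor's electronic storage device.
configs = {"co2": SensorConfig(active=True, acquisition_rate_hz=1.0)}

def apply_update(configs, update):
    """Merge updated sensor configuration received from the cloud server.

    Only the fields present in the update payload are changed; absent
    fields retain their stored values.
    """
    for sensor_id, fields in update.items():
        cfg = configs[sensor_id]
        cfg.active = fields.get("active", cfg.active)
        cfg.acquisition_rate_hz = fields.get("acquisition_rate_hz",
                                             cfg.acquisition_rate_hz)

# The cloud server lowers the CO2 acquisition rate to save power:
apply_update(configs, {"co2": {"acquisition_rate_hz": 0.1}})
```

The monitor's acquisition loop would then consult `configs` to decide whether, and how often, to sample each sensor.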
Sensor configuration updates
Figure 5 depicts an example system comprising a person monitor 500 and a cloud server 550. The person monitor 500 is configured to monitor a person and comprises at least one sensor 510, a transceiver 530 in communication with cloud server 550, an electronic storage device 540, and an electronic processor 520 in communication with the at least one sensor 510, the transceiver 530, and the electronic storage device 540. The electronic storage device 540 comprises sensor configuration information 542 comprising a sensor activation state 544 and/or a sensor acquisition rate 546. The at least one sensor 510 is configured to acquire data at a rate based at least partially on the sensor acquisition rate 546 and/or is activated or deactivated based at least partially on the sensor activation state 544.
Cloud server 550 is configured to update the sensor configuration information 542 stored within electronic storage device 540. The electronic processor 520 of the person monitor 500 is configured to receive, from the cloud server 550 via the transceiver 530, updated sensor configuration information; update the sensor configuration information 542 within the electronic storage device 540 with the updated sensor configuration information; and adjust a sensor activation state 544 and/or sensor acquisition rate 546 based at least partially on the updated sensor configuration information.
For example, the cloud server 550 can monitor the data stream output from the at least one sensor 510 of the person monitor 500 to determine if the at least one sensor 510 needs to be activated at the present time and, if so, what its data acquisition rate should be. This can reduce power consumption of the person monitor 500 by reducing the rate at which data is acquired by the at least one sensor 510, or by deactivating the at least one sensor 510 entirely if its output is not required. Although this configuration can be useful where the person monitor 500 is an ultra-low power person monitor 500, it should be understood that other person monitors without ultra-low power requirements may also be configured to receive updated sensor configuration from an associated cloud server.
In some examples, the cloud server 550 can be configured to receive sensor data from the person monitor 500; generate updated sensor configuration information based at least partially on the received sensor data, the updated sensor configuration information comprising an updated sensor activation state and/or at least one updated sensor acquisition rate; and push the updated sensor configuration information to the person monitor 500.
Factors that can influence changes or updates to the sensor activation state 544 and/or sensor acquisition rate 546 can include detected deviations from established patterns or expected behaviour. For example, the person monitor 500 can continuously locally analyse behaviour modes of the monitored person and can provide information to the cloud server. The cloud server can then detect changes in patterns and can update the sensor configuration information if the deviation is considered to be of interest. In some examples, the cloud server 550 may use one or more machine learning models to determine whether certain deviations or sensed data are of interest and to adjust the sensor configuration information accordingly.
Although the cloud server 550 can be configured to reduce acquisition rates or deactivate the at least one sensor 510 entirely, in some situations, the cloud server 550 may determine that more granular measurements from the at least one sensor 510 are required, and may increase the sensor acquisition rate 546 accordingly. The cloud server 550 can also determine that a certain measurement or determination based on some sensor data is ambiguous, and sensor fusion from an additional sensor is required to resolve the ambiguity. In these situations, cloud server 550 may activate the at least one sensor 510 in order to acquire the data necessary to resolve the ambiguity.
In some examples where the person monitor 500 is configured for ultra-low power consumption, the power consumed by the person monitor 500 may be no greater than 1 mW. The power consumed by the person monitor 500 may be supplied exclusively by a battery on-board the person monitor 500.
In some examples, the at least one sensor 510 can comprise a SHT40-AD1B-R3 temperature and relative humidity sensor as manufactured by Sensirion. In further examples, the at least one sensor can comprise an APDS-9250 light sensor as manufactured by Broadcom. In further examples, the at least one sensor 510 can comprise a SN-GCJA5L PM2.5 sensor as manufactured by Panasonic.
In some examples, the sensor acquisition rate 546 can be 50 milliseconds per reading, taken in bursts of measurements at periodic intervals.
Rules-based logic for state determination
The person monitors and/or associated cloud servers described herein can be configured to determine the state of a monitored person based at least partially on sensor data. Determining and tracking the different states of the monitored person can be used to detect deviations from patterns of behaviour and changes or trends, such as long-term health trends, for the monitored person.
For example, a person monitor can determine that a monitored person is sleeping based on CO2 measurements and audio measurements. Other data, such as the time of day and the monitored person's established patterns, can also be used as inputs for the determination. The person monitor can further be configured to discriminate between different kinds of sleep, such as restful sleep or restless sleep, based on sensor fusion. An audio sensor of the person monitor may indicate that there is low audible activity nearby the monitored person, and this data can be fused with data from a CO2 sensor to indicate that the person is sleeping. A low CO2 reading can indicate that the monitored person is in a restful sleep, while a comparatively higher CO2 reading can indicate a restless sleep.
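The sensor-fusion rule just described could be sketched as below. The thresholds are invented for illustration; in practice they would be drawn from the monitored person's learned patterns rather than fixed constants.

```python
def classify_sleep(audio_level_db, co2_ppm,
                   quiet_db=35.0, restful_co2_ppm=800.0):
    """Fuse audio and CO2 readings into a coarse sleep state.

    quiet_db and restful_co2_ppm are assumed example thresholds.
    """
    if audio_level_db >= quiet_db:
        return "awake"              # audible activity nearby the person
    if co2_ppm <= restful_co2_ppm:
        return "restful sleep"      # quiet environment, low CO2
    return "restless sleep"         # quiet environment, elevated CO2

classify_sleep(30.0, 650.0)    # low audio, low CO2
classify_sleep(30.0, 1100.0)   # low audio, elevated CO2
```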
In some instances, the state of a monitored person may be ambiguous given a set of sensor data from the person monitor. While sensor fusion can be used to resolve these ambiguities in many cases, this may be insufficient in some situations. In some examples, the person monitor and/or its associated cloud server can be configured to use rules-based logic to determine the state of a monitored person. Because certain states are causally related to other states, knowledge of a preceding state of the monitored person can restrict the number of possible current states for the monitored person. This can be used to help resolve potential ambiguities in the determined state of the monitored person.
For example, rules-based logic dictates that an 'awakening' state must necessarily follow a 'sleeping' state, as the monitored person can only awaken if they were previously asleep. Sensor data from a person monitor may suggest that the monitored person could be in one of several possible states, at least one of which includes awakening. However, if the monitored person was recently determined to be awake without an intervening sleeping state, then the 'awakening' state can be ruled out (or at least considered very unlikely) from this list of possible states.
To this end, Figure 6 depicts an example person monitor 600 configured to determine the state of a monitored person. The person monitor 600 comprises at least one sensor 610, an electronic processor 620 in communication with the at least one sensor 610, and an electronic storage device 640 configured to store at least one previously determined state 642 and associated time of determination 644. The electronic processor 620 can be configured to determine a state of the monitored person based at least partially on an output of the at least one sensor 610 and the at least one previously determined state 642 and associated time of determination 644 stored within the electronic storage device 640.
Rules-based logic can be used to help determine the possible states that the monitored person may be in given the at least one previously determined state 642. The associated time of determination 644 indicates how recently the state 642 was determined and can be used to weight the probability of possible states for the monitored person. For example, if the monitored person was previously determined to be awake and the associated time of determination 644 was only seconds prior, then the current state of the monitored person is very unlikely to be awakening, as it is very unlikely that the person fell asleep in the intervening time. Conversely, if the associated time of determination 644 was 12 hours prior, then the influence of the at least one previously determined state 642 can be weighted accordingly, as the monitored person's state may have varied considerably over the past 12 hours.
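The rules-based pruning of candidate states, weighted by how recently the previous state 642 was determined, could be sketched as follows. The state names, predecessor table, and 10-minute window are illustrative assumptions, not taken from the specification.

```python
# State -> states that may immediately precede it ('awake' may persist).
PREDECESSORS = {
    "awakening": {"sleeping"},
    "sleeping": {"falling asleep"},
    "falling asleep": {"awake"},
    "awake": {"awakening", "awake"},
}

def prune(candidates, previous_state, seconds_since, window_s=600):
    """Drop candidate states that contradict recent history.

    If the previous determination is older than window_s, its influence
    is discounted and all candidates are kept.
    """
    if seconds_since > window_s:
        return set(candidates)
    return {s for s in candidates
            if previous_state in PREDECESSORS.get(s, {previous_state})}

# Person was determined 'awake' 30 s ago: 'awakening' is ruled out,
# since awakening must follow a sleeping state.
prune({"awakening", "awake"}, "awake", 30)  # -> {"awake"}
```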
In some examples, the rules dictating how states are causally related to one another can be codified in state rules 646, which is stored in the electronic storage 640 of the example person monitor 600 depicted in Figure 6. In other person monitors, state rules 646 may be stored offsite (e.g. on an associated cloud server) and may be accessed by the person monitor via a transceiver as required. State rules 646 can be pre-defined or can be learned through e.g. machine learning. In still further examples, state rules 646 may be stored on an electronic storage device of a cloud server, with the processor of the cloud server configured to determine the state of the monitored person.
In some examples, the at least one previously determined state 642 used to determine the monitored person's present state is the most recently determined state for the monitored person. In other words, the associated time of determination 644 can be the most recent time stored within the electronic storage device 640 of the person monitor 600 (or electronic storage device of the cloud server, as the case may be).
In some examples, the list of determined states can include awakening, falling asleep, and sleeping. In further examples, the list of determined states can be more granular and can include broken sleeps with awake periods, out-of-bed periods, and awake-in-bed durations at the end of the night. Although the state determination may be performed by the electronic processor 620 of the person monitor 600, in other examples, person monitor 600 may perform minimal or no data processing, and the determination may be handled by a backend service (e.g. a cloud server). To this end, the person monitor 600 may include a transceiver configured to communicate with an associated cloud server. The cloud server can include an electronic processor and electronic storage device configured to store at least one previously determined state and associated time of determination. The electronic processor of the cloud server can be configured to determine a state of the monitored person based at least partially on the at least one previously determined state and associated time of determination stored within the electronic storage device of the cloud server, in addition to sensor data from the at least one sensor 610 received via the transceiver of person monitor 600.
The states of the monitored person can also be determined using data derived from at-home healthcare devices such as blood pressure monitors, glucose monitors, weigh scales, and dialysis machines. For example, data from these at-home healthcare devices can be received by the person monitor using Bluetooth Low Energy or other communication protocols. In some examples, a person monitor can be configured to monitor a person and can comprise at least one sensor, an electronic processor in communication with the at least one sensor, and a receiver configured to receive data from at least one healthcare device. The electronic processor of the person monitor can be configured to determine a state of the monitored person based at least partially on an output of the at least one sensor and the data received from the at least one healthcare device. In other examples, the person monitor can include a transceiver configured to communicate with a cloud server. The cloud server can include an electronic processor configured to determine a state of the monitored person based at least partially on the output of the at least one sensor and the data received from the at least one healthcare device via the transceiver of the person monitor.
Rules-based logic for location determination
The person monitors and/or associated cloud servers disclosed herein can be configured for presence detection and/or to determine the number of personnel within a room, as further described herein. Additionally, some person monitors can be further configured to determine a location of the monitored person.
The granularity or specificity of the monitored person's location as determined by the person monitor can vary depending on how the person monitor is configured. In some examples, the person monitor can be configured to determine the monitored person's location at the level of different rooms within the building housing the monitored person. For example, the location of the monitored person may be determined at the level of the monitored person's bedroom, bathroom, or living room, without an estimation of where exactly the monitored person is within a given room. In other examples, the determined location may be more quantitative or granular. For example, the determined location may be expressed using more specific areas or quadrants within rooms, and/or may include an estimated coordinate with e.g. one or more confidence intervals.
The physical layout of the building housing the monitored person imposes restrictions on how the monitored person can move from location to location. For example, if the building includes a plurality of rooms, then the rooms may be physically laid out so that only some rooms are accessible from other certain rooms - e.g. the person may not be able to move from their bedroom to their bathroom without traversing through a hallway. Similarly, as the monitored person can only move through the building with a finite speed, there is a limit on how far they can feasibly move from a location within a given timeframe.
These limitations can be used to impose rules-based logic on person monitors that are configured to determine a location of the monitored person. For example, there can arise situations where sensor data from a person monitor indicates that the monitored person could be in a number of different possible rooms. However, if the monitored person was recently determined to be in a previous location at a certain time, then rooms or locations which are physically inaccessible from the previous location (or could not be accessed within the relevant timeframe) can be excluded from the list of possible locations of the monitored person.
To this end, Figure 7 depicts an example person monitor 700 configured to determine a location of a monitored person. The person monitor 700 comprises at least one sensor 710, an electronic processor 720 in communication with the at least one sensor 710, and an electronic storage device 740 configured to store at least one previously determined location 742 and associated time of determination 744. The electronic processor 720 can be configured to determine a location of the monitored person based at least partially on an output from the at least one sensor 710 and the at least one previously determined location 742 and associated time of determination 744 stored within the electronic storage device 740.
In some examples, the person monitor 700 can be provided with or can reference location information 746 which codifies the logical relationship between different possible locations. In the example depicted in Figure 7, location information 746 is stored within electronic storage device 740. In other examples, location information 746 may not be stored locally and may be stored on an associated cloud server and provided as needed. Location information 746 may also be stored in the associated cloud server in examples where the person monitor 700 performs minimal to no data processing, with the location of the monitored person (and any associated time of determination) being determined by the cloud server in communication with person monitor 700.
In some examples, the at least one previously determined location 742 used to determine the monitored person's present location is the most recently determined location for the monitored person. In other words, the associated time of determination 744 can be the most recent time stored within the electronic storage device 740 of the person monitor 700 (or electronic storage device of the cloud server, as the case may be).
The person monitor 700 can further be configured to determine the location of the monitored person within a building comprising a plurality of rooms, with the location of the monitored person as determined by person monitor 700 corresponding to a room of the plurality of rooms. The person monitor 700 can refer to location information 746 to determine which rooms are accessible (or likely to be accessible) and which are inaccessible (or likely to be inaccessible) from the room corresponding to the most recently determined location of the monitored person over the relevant timeframe (e.g. using associated time of determination 744), and can include or exclude potential rooms on this basis. The remaining list of possible rooms/locations can then be used to resolve any ambiguities in the location of the monitored person. For example, the determined location may be a room that is adjacent to the previously determined location.
The person monitor 700 can exclude or include possible locations based on the previously determined locations 742 and associated times of determination 744. For example, given a list of potential locations that the monitored person may be in, the person monitor 700 can be configured to categorise these as either 'possible' or 'not possible'. In other examples, the person monitor 700 can be configured to assign a probability to each possible location. This may be qualitative (e.g. 'likely' or 'unlikely') or quantitative (e.g. 37%). These determined probabilities can then be fed to e.g. a location determination classifier in addition to raw or processed sensor information in order to determine the location of the monitored person.
In some examples, the location information 746 may be pre-defined and provided directly to the person monitor 700 (and/or associated cloud server). For example, if the monitored person resides in their house, the house may be mapped and represented in the form of a mathematical graph, with nodes representing rooms and edges between nodes representing physically accessible routes linking rooms together, potentially with edge weights corresponding to distances between rooms. In some examples, the location information 746 can be provided as complete information at the time of installation, without the need for person monitor 700 (and/or associated cloud server) to learn or determine the layout of the building itself. The person monitor 700 (and/or associated cloud server) can also be provided with pre-trained presence detection classifiers (or data for training presence detection classifiers) to correlate sensor data to the specified rooms laid out in location information 746.
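A graph representation of location information and the rules-based pruning of candidate rooms could be sketched as below. The floor plan, per-hop traversal time, and function names are made-up illustrations, not taken from the specification.

```python
# Location information as a graph: rooms (nodes) and traversable
# connections between them (edges), per the mapped-house example.
ADJACENT = {
    "bedroom": {"hallway"},
    "bathroom": {"hallway"},
    "hallway": {"bedroom", "bathroom", "living room"},
    "living room": {"hallway"},
}

def feasible(candidates, previous_room, seconds_since, seconds_per_hop=5):
    """Keep candidate rooms reachable from previous_room in elapsed time.

    seconds_per_hop is a hypothetical minimum time to traverse one room;
    the reachable set is grown one hop at a time (breadth-first).
    """
    max_hops = int(seconds_since // seconds_per_hop)
    reachable, frontier = {previous_room}, {previous_room}
    for _ in range(max_hops):
        frontier = {n for room in frontier for n in ADJACENT[room]} - reachable
        reachable |= frontier
    return set(candidates) & reachable

# 6 s after being seen in the bedroom, the bathroom (two hops away via
# the hallway) is excluded, while the hallway remains possible:
feasible({"bathroom", "hallway"}, "bedroom", 6)
```

The surviving set can then be handed to a location determination classifier alongside raw or processed sensor data, as described above.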
In other examples, location information 746 can be at least partially learned by the person monitor 700 and/or associated cloud server. For example, person monitor 700 may gather data over a training period to determine occupancy, the number of personnel within rooms, and/or movement events. This data can be assessed using machine learning models to build up location information 746. For example, the person monitor 700 (and/or associated cloud server) can use unsupervised machine learning models, such as k-means clustering, to identify rooms or different locations that the monitored person moves between. Spatial relationships between the different locations identified via machine learning can also be learned and codified within location information 746. Alternatively, location information 746 may be constructed by person monitor 700 and/or associated cloud server using supervised training with labelled data.
In still further examples, location information 746 can be constructed using a combination of different approaches. For example, person monitor 700 or its associated cloud server can be trained to determine the state or activity of a person and can determine the location or position of the monitored person using presence detection, as disclosed herein. Unsupervised learning can be used to identify locations frequented by the monitored person, and those locations can be contrasted against determined states and activities. These determined states and activities can then be used to label the locations identified using machine learning. For example, if the person is consistently determined to be asleep in a certain location, then the person monitor 700 or associated cloud server can associate that location with a bedroom. Similarly, if the person is consistently determined to be cooking in a certain location, then the person monitor 700/cloud server can associate that location with a kitchen.
In still further examples where the viability of a potential location is assessed by estimating the required speed that the monitored person would need to move in order to be at that location, then location information 746 can comprise a range of possible or feasible speeds for the monitored person. These can be provided directly (using e.g. statistical speeds for the cohort of the monitored person) or can be assessed or learned through measurements.
Although the above disclosure is framed in terms of person monitor 700 determining the location of the monitored person, in other examples, the person monitor 700 may perform minimal or no data processing, and the location of the monitored person may be determined by a processor of a server such as a cloud server in communication with the person monitor 700.
To this end, in some examples, person monitor 700 can include a transceiver configured to communicate with an associated cloud server. The cloud server can include an electronic storage device configured to store at least one previously determined location and associated time of determination and an electronic processor. The electronic processor of the cloud server can be configured to determine a location of the monitored person and associate the determined location with a time of determination based at least partially on the at least one previously determined location and associated time of determination stored within the electronic storage device of the cloud server, in addition to sensor data from the at least one sensor 710 received via the transceiver of the person monitor 700.
Location monitoring and presence detection
Person monitors disclosed herein can be configured to detect the presence of the monitored person(s) based on sensor data. For example, a person monitor can be configured to detect a person's presence based at least partially on sensor data from at least one CO2 sensor and at least one H2O sensor. The CO2 sensor data and H2O sensor data can be fused for presence detection. The person monitor (and/or associated cloud server) can further be configured to compare the CO2 sensor data and H2O sensor data with external CO2 and/or H2O information (or other weather information).
For example, the quantity and location of H2O and CO2 sources (such as equipment and/or people within the home) can be estimated as a function of the emissions from those sources, the detected state of ventilation within the building, outdoor or external H2O and CO2 values, indoor H2O and CO2 measurements from at least one location within the home, and the rates of change (and second-order rates of change) at the monitored locations.
In some examples, it can be advantageous to not only detect the presence of a person within a room, but also to determine the number of people within a room. This information can be used, for example, to help rationalise other sensor measurements. For instance, if multiple people are inside a single room, a CO2 sensor of a person monitor may read an abnormally high CO2 level. A system assuming that the room is occupied only by the monitored person may then issue an alert or initiate an emergency response on this basis. However, while the CO2 measurements may be abnormally high for a single individual, the readings may actually be normal for a room inhabited by multiple people.
To this end, Figure 8 depicts a person monitor 800 configured to monitor at least one person within a building comprising at least one room. The person monitor 800 comprises at least one CO2 sensor 812, at least one H2O sensor 814, and an electronic processor 820 in communication with the at least one CO2 sensor 812 and at least one H2O sensor 814. The electronic processor 820 is configured to determine a number of people within a room of the building based at least partially on an output of the at least one CO2 sensor 812 and an output from the at least one H2O sensor 814.
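One common way such an occupant count could be derived from CO2 readings is a steady-state mass balance; this approach is consistent with, but not specified by, the text, and every parameter value below is an illustrative assumption.

```python
def estimate_occupants(indoor_co2_ppm, outdoor_co2_ppm,
                       ventilation_m3_per_h=50.0,
                       co2_per_person_m3_per_h=0.018):
    """Estimate the number of people in a room at steady state.

    The indoor excess over the outdoor (external) CO2 level, multiplied
    by the ventilation flow, gives the CO2 volume added per hour; this is
    divided by an assumed per-person exhalation rate (~0.018 m3/h for a
    seated adult).
    """
    excess_fraction = (indoor_co2_ppm - outdoor_co2_ppm) * 1e-6
    generated_m3_per_h = excess_fraction * ventilation_m3_per_h
    return round(generated_m3_per_h / co2_per_person_m3_per_h)

estimate_occupants(1150, 420)  # elevated indoor reading vs outdoor air
```

In a fuller implementation, H2O measurements and external weather data (as from external database 860) would be fused with this estimate rather than relying on CO2 alone.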
In this example, person monitor 800 further includes a transceiver 830 that can receive information from external database 860. The person monitor 800 can be configured to communicate with external database 860 via intermediaries, such as gateways and/or a cloud server (not depicted). External database 860 can comprise external information such as external weather information (e.g. pressure, temperature, and/or humidity data) not directly measured by person monitor 800.
The electronic processor 820 of person monitor 800 can additionally or alternatively be configured to detect a movement event corresponding to a monitored person moving from a first room to a second room, based at least partially on an output from the at least one CO2 sensor 812 and an output from the at least one H2O sensor 814.
For example, the person monitor 800 can evaluate the precise time at which a proximal CO2 source has ceased to be in one part of the monitored home and has e.g. moved to another part of the home, with the precise timing of the movement being discernible from the CO2 measurements. When sensor signals relating to H2O changes are fused in the processing of the person monitor (and/or associated cloud server), then accurate states relating to the movement of people within the building, as well as their vigour and activity, can be attributed.
Although the above disclosure is framed in terms of person monitor 800 determining the number of people within a room of the building, in other examples, the person monitor 800 may perform minimal or no data processing, and the number of people within the room can be determined by a processor of a server such as a cloud server in communication with the person monitor 800.
To this end, in some examples, person monitor 800 can include a transceiver configured to communicate with an associated cloud server. The cloud server can include an electronic processor. The electronic processor of the cloud server can be configured to determine a number of people within a room of the building based at least partially on an output from the at least one CO2 sensor 812 and an output from the at least one H2O sensor 814 of the person monitor 800. Similarly, the electronic processor of the cloud server can additionally or alternatively be configured to detect a movement event corresponding to at least one person moving from a first room to a second room based at least partially on an output from the at least one CO2 sensor 812 and an output from the at least one H2O sensor 814 of the person monitor 800.
Personalised audio cues
In addition to detecting long-term trends, the person monitors disclosed herein can be configured to detect emergency or acute situations and to issue alerts to carers, healthcare personnel, and/or emergency services. In some cases, these situations can be detected as deviations from the monitored person's established pattern. In other cases, the person monitor can be configured to issue alerts after being triggered by certain cues without reference to the monitored person's pattern of living. For example, a person monitor can be configured to issue an alert if triggered by certain sounds or noises detected by an audio sensor of a person monitor. In some examples, the sounds or noises which trigger an alert can be predefined. For example, the person monitor can be configured to issue an alert if the audio sensor of the person monitor detects the sound of glass breaking. In other examples, the noises or sounds that the person monitor is configured to respond to can be determined or at least partially defined by the monitored person. This can allow the monitored person to have agency over the types of sounds or noises that will trigger an alert.
Figure 9 depicts an example of a person monitor 900 configured to monitor a person. The person monitor 900 comprises an electronic processor 920 and at least one audio sensor 912 in communication with the electronic processor 920. The person monitor 900 further comprises an electronic storage device 940 configured to store at least one audio criteria 942. The audio criteria 942 is at least partially user-defined by the monitored person. The electronic processor 920 is configured to receive data corresponding to a sound sensed by the at least one audio sensor 912, compare the data with the at least one audio criteria 942, and issue an alert based at least partially on the comparison.
In some examples, the audio criteria 942 may be defined over a recording phase during which audio sensor 912 is activated. Sounds created by the monitored person during the recording phase can then be processed and/or recorded. The characteristics of the recorded sound can then be used to define audio criteria 942. The recording phase can be initiated, for example, by the monitored person interacting with the person monitor 900 via an HMI (not depicted). Although the sound recorded during the recording phase may be preserved as a waveform in some examples, it may also be processed and deconvoluted into e.g. its spectral components. In some examples, audio criteria 942 can correspond to mechanical sounds such as knocks, scrapes, and bumps. In further examples, the audio criteria 942 can correspond to a customised sequence of mechanical sounds. For example, the audio criteria 942 can be a sequence of knocks. The monitored person may define the audio criteria 942 by initiating a recording phase and knocking in their desired sequence as audio sensor 912 is activated.
In other examples, audio criteria 942 can correspond to one or more spoken words or vocalised sounds. For example, the monitored person may decide that they want an otherwise innocuous phrase to be used as a trigger phrase for the person monitor to issue an alert.
In some examples, the comparison between the sound sensed by the at least one audio sensor 912 and the at least one audio criteria 942 can be made using a machine learning model. This can operate on electronic processor 920 or may operate on e.g. an associated cloud server. For example, the sound sensed by the at least one audio sensor 912 can be processed and deconvoluted to produce a spectrogram using, for example, a fast Fourier transform. The resulting spectrogram can then be processed using a convolutional neural network to produce a feature map. In some examples, the feature map can further be used as an input for additional machine learning models, such as a linear classifier. In other examples, other kinds of machine learning models can be used. In still further examples, the comparison made by the electronic processor 920 may not use a machine learning model.
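The spectrogram step described above — framing the signal and applying a fast Fourier transform per frame — can be sketched as below. This is a simplified illustration; the frame size, hop length, and test tone are assumptions for demonstration, not parameters from this disclosure.

```python
import numpy as np

def spectrogram(signal, frame=64, hop=32):
    """Magnitude spectrogram via a short-time FFT: window each frame
    with a Hann window and take one real FFT per frame."""
    frames = [signal[i:i + frame]
              for i in range(0, len(signal) - frame + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames) * np.hanning(frame), axis=1))

fs = 1000                              # sample rate (Hz), illustrative
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 125 * t)     # 125 Hz test tone
S = spectrogram(tone)                  # shape: (num_frames, frame // 2 + 1)
peak_bin = int(S[0].argmax())
print(peak_bin * fs / 64)              # -> 125.0 (dominant frequency)
```

The resulting spectrogram (or a feature map derived from it by a convolutional neural network) would then feed the comparison against the stored audio criteria.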
Audio criteria 942 can include a number of characteristics relating to the corresponding sound, and these can be used during the comparison by electronic processor 920. For example, the audio criteria 942 can comprise peak intensities with respect to time, average sound pressure levels with respect to time, waveforms, and spectral information. These characteristics can be used to determine whether the sound sensed by the at least one audio sensor 912 corresponds to the audio criteria 942.
Furthermore, audio criteria 942 can correspond to multiple sounds so that an alert can be issued if any one of multiple trigger cues are detected.
In some examples, the audio sensor 912 can be a microphone. Different kinds of microphones can be used, including conventional microphones and MEMS-based microphones.
Although the above discussion is framed in terms of person monitor 900 storing audio criteria 942, and electronic processor 920 performing the comparison between the sensed sound and audio criteria 942, an associated cloud server can be configured to perform the comparison in other examples. For example, an electronic storage device of the cloud server can store the audio criteria. Data from the at least one audio sensor can then be relayed to the cloud server and an electronic processor of the cloud server can then perform the comparison between the sensed sound and audio criteria.
To this end, a system for monitoring a person can comprise a cloud server and a person monitor. The person monitor can comprise at least one audio sensor and a transceiver configured to communicate with the cloud server. The cloud server can comprise an electronic processor and an electronic storage device configured to store at least one audio criteria. The electronic processor of the cloud server can be configured to receive data corresponding to a sound sensed by the at least one audio sensor of the person monitor, compare the data with the at least one audio criteria, and issue an alert based partially on the comparison, wherein the at least one audio criteria is at least partially user-defined by the monitored person. In further examples, person monitors can be configured to issue an alert based on audio cues corresponding to spoken words.
In some further examples, person monitors can be configured to issue alerts in response to audible cues that are not defined by the monitored person. For example, a person monitor can be configured to issue an alert if a cry for 'Help!' is heard by an audio sensor of the person monitor. However, differences between individuals (such as accent, vocal inflection, tone, cadence, volume, and other characteristics of speech) can make these audible cues more difficult to recognise accurately. These variations can complicate audio recognition even when the audible cues are at least partially defined by the monitored person.
Figure 10 depicts a further example of a person monitor 1000 configured to monitor a person. The person monitor 1000 comprises an electronic processor 1020 and at least one audio sensor 1012 in communication with the electronic processor 1020. The person monitor 1000 further comprises an electronic storage device 1040 configured to store at least one audio signature 1042 and audio machine learning model 1044. The electronic processor 1020 is configured to receive data corresponding to a sound sensed by the at least one audio sensor 1012, compare the data with the at least one audio signature 1042 using the audio machine learning model 1044, and issue an alert based at least partially on the comparison. The audio machine learning model 1044 is at least partially trained using audio training data comprising a sound made by the monitored person.
In some examples, the audio machine learning model 1044 can be at least partially pre-trained using existing audio training data. The existing audio training data can be generic data (e.g. acquired across a very broad cohort of different people) or may be more specific training data relating to the cohort of the monitored person. For example, the training data may be derived from people who have statistically similar characteristics to the monitored person, such as their gender, age, health condition (if this affects their vocalisation), amongst other characteristics. In other examples, the audio machine learning model 1044 may be trained exclusively using data acquired by the person monitor 1000 over a training period.
In some examples, the audio machine learning model 1044 can passively and continuously operate in the background to learn the monitored person's vocal characteristics, and may continually learn the monitored person's vocal characteristics during e.g. background conversations. In other examples, the machine learning model 1044 may be trained over a finite training period during which the at least one audio sensor 1012 records spoken audio by the monitored person. The machine learning model 1044 can also be trained using audio data originating from the monitored person that is provided, for example, by associated carers via e.g. a network of smartphones.
In some examples, the audio machine learning model 1044 can comprise a convolutional neural network. The machine learning model 1044 can further comprise a pipeline of machine learning models such as an additional linear classifier that receives the output from a convolutional neural network.
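The pipeline described above — a convolutional stage producing a feature map that feeds a linear classifier — can be sketched in miniature. This is a hypothetical, hand-weighted illustration only: a real model would learn its convolution kernel and classifier weights from the monitored person's recordings, and the input here is a toy signal rather than audio.

```python
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1-D convolution (no padding): one dot product per
    position, producing a feature map of length len(x) - len(kernel) + 1."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel)
                     for i in range(len(x) - k + 1)])

def linear_classifier(features, weights, bias=0.0):
    """Score a feature map and squash to a probability via a sigmoid."""
    score = float(np.dot(features, weights) + bias)
    return 1.0 / (1.0 + np.exp(-score))

x = np.array([0.0, 1.0, 1.0, 0.0, -1.0, -1.0, 0.0])  # toy input signal
edge_kernel = np.array([1.0, -1.0])          # responds to level changes
fmap = np.maximum(conv1d(x, edge_kernel), 0.0)   # ReLU feature map
p = linear_classifier(fmap, np.ones_like(fmap))
print(p > 0.5)                               # classified as a match
```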
The audio signature 1042 can be any number of audio cues which are used to issue alerts. For example, audio signature 1042 can comprise a number of pre-defined spoken words. In other examples, the audio signature 1042 can comprise at least one spoken word that is at least partially user defined.
While the example person monitor 1000 includes the audio machine learning model 1044, in other examples, the audio machine learning model 1044 (and optionally audio signature 1042) may be stored on an associated cloud server, with person monitor 1000 sending sensed audio data to the cloud server via at least one transceiver. The electronic processor of the cloud server may also or alternatively perform the comparison using the audio machine learning model. For example, an electronic storage device of the cloud server can store the audio signature. Data from the at least one audio sensor can then be relayed to the cloud server and an electronic processor of the cloud server can then perform the comparison between the sensed sound and audio signature using the audio machine learning model.
To this end, a system for monitoring a person can comprise a cloud server and a person monitor. The person monitor can comprise at least one audio sensor and a transceiver configured to communicate with the cloud server. The cloud server can comprise an electronic processor and an electronic storage device configured to store at least one audio signature. The electronic processor of the cloud server can be configured to receive data corresponding to a sound sensed by the at least one audio sensor of the person monitor, compare the data with the at least one audio signature using an audio machine learning model, and issue an alert based partially on the comparison, wherein the audio machine learning model is at least partially trained using audio training data comprising a sound made by the monitored person. In some examples, the audio machine learning model can be stored on the electronic storage device of the cloud server.
Advantages
By first learning the monitored person's normal pattern of activity and consequently detecting deviations from that pattern, the person monitors and associated cloud servers disclosed herein can be configured to identify trends and events in or relating to the monitored person at an early stage, thereby allowing for early intervention before an emergency response is required. The monitored person is also able to call for immediate help if needed (either from emergency personnel or their trusted carers) by using hands-free phrases or audio cues that can be personalised or customised by the monitored person, without the need for cumbersome wearable devices. The on-board sensors of the person monitors can be contained entirely within the housing of the person monitor. The use of multiple sensors distributed throughout a building, such as pressure sensors under mattresses or water sensors installed in plumbing, is not required. In some examples, the monitored person does not need to interact with the person monitor or understand its function, and so is not presented with any technological barriers.
Processed data and inferred events/trends can be made available to the carer group of the monitored person, allowing the carer to virtually check up on the monitored person to see if everything is as expected. The data and events/trends made available to the carer group can be presented in a way that does not violate the privacy of the monitored person. Furthermore, the monitored person can have control over the membership of their carer group and each carer's respective level of access to data, preserving the privacy of the monitored person and giving them agency over their care. Carers of the monitored person can also automatically receive useful insights into the wellbeing of the monitored person using their personal devices, such as smartphones.
Furthermore, the person monitors disclosed herein can be configured for ultralow power usage and can be powered solely using a battery for years at a time. The cloud server associated with the person monitor can be configured to handle all significant data processing to enable the person monitor to operate at very low power without sacrificing accuracy in the processing of sensor data.
The person monitors disclosed herein can include electronic processors of different capabilities, particularly depending on the power requirements of the person monitor. In some examples, the person monitors may be part of a system and may be networked to an associated cloud server, as discussed herein. Although the person monitors include electronic processors for e.g. handling sensor data, in many instances, the processing capabilities of the person monitors may be limited, with the majority of the processing handled by the associated cloud server.
The person monitors and/or cloud servers disclosed herein can comprise electronic computing devices, and the methods disclosed herein can be computer-implemented methods implemented on those electronic computing devices. Although person monitors and cloud servers comprising electronic processors and electronic storage devices have been disclosed, it should be understood that the person monitors and/or cloud servers can comprise additional components common to electronic computing devices. For example, these can include memory (e.g. a volatile memory such as a RAM) for the loading of executable instructions, the executable instructions defining the functionality that the person monitor or cloud server carries out under control of its electronic processor. In some examples, there may also be a user interface for user control, which may comprise, for example, computing peripheral devices such as display monitors, computer keyboards and the like. Furthermore, although 'cloud server' is discussed in the singular, it should be understood that the cloud server may be a single computer, a single server, or have the functionality performed by a server apparatus distributed across multiple server components connected via a communications network (such as a private LAN, a WAN, or the public internet).
While the present invention has been illustrated by the description of the examples thereof, and while the examples have been described in detail, it is not the intention of the Applicant to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details, representative apparatus and method, and illustrative examples shown and described. Accordingly, departures may be made from such details without departure from the spirit or scope of the Applicant's general inventive concept.

Claims

CLAIMS:
1. A person monitor configured to monitor a person, the person monitor comprising: at least one audio sensor, an electronic processor in communication with the at least one audio sensor, and an electronic storage device configured to store at least one audio signature; wherein the electronic processor is configured to: receive data corresponding to a sound sensed by the at least one audio sensor, compare, using an audio machine learning model, the data with the at least one audio signature, and issue an alert based at least partially on the comparison; wherein the audio machine learning model is at least partially trained using audio training data comprising a sound made by the person.
2. The person monitor of claim 1, wherein the at least one audio sensor comprises a microphone.
3. The person monitor of claim 1 or claim 2, wherein the audio machine learning model is at least partially pre-trained using existing audio training data.
4. The person monitor of any one of claims 1 to 3, wherein the audio machine learning model comprises a convolutional neural network.
5. The person monitor of any one of claims 1 to 4, wherein the audio machine learning model comprises a linear classifier.
6. The person monitor of any one of claims 1 to 5, wherein the audio machine learning model comprises a linear classifier.
7. The person monitor of any one of claims 1 to 6, wherein the audio signature comprises at least one pre-defined spoken word.
8. The person monitor of any one of claims 1 to 7, wherein the audio signature comprises at least one spoken word that is at least partially user defined.
9. The person monitor of any one of claims 1 to 8, wherein the person monitor is configured to communicate with a cloud server.
10. A person monitor configured to monitor a person, the person monitor comprising: at least one audio sensor, an electronic processor in communication with the at least one audio sensor, and an electronic storage device configured to store at least one audio criteria; wherein the electronic processor is configured to: receive data corresponding to a sound sensed by the at least one audio sensor, compare the data with the at least one audio criteria, and issue an alert based at least partially on the comparison; wherein the at least one audio criteria is at least partially user-defined by the person.
11. The person monitor of claim 10, wherein the at least one audio sensor is a microphone.
12. The person monitor of claim 10 or claim 11, wherein the audio criteria is defined over a recording phase during which the at least one audio sensor is activated.
13. The person monitor of any one of claims 10 to 12, wherein the audio criteria comprises a mechanical sound.
14. The person monitor of claim 13, wherein the mechanical sound comprises a knock, a scrape, and/or a bump.
15. The person monitor of any one of claims 10 to 14, wherein the audio criteria comprises a sequence of mechanical sounds.
16. The person monitor of claim 15, wherein the sequence of mechanical sounds comprises a sequence of knocks.
17. The person monitor of any one of claims 10 to 12, wherein the audio criteria comprises a spoken word.
18. The person monitor of any one of claims 10 to 17, wherein the comparison is made using a machine learning model.
19. The person monitor of any one of claims 10 to 18, wherein the person monitor is configured to communicate with a cloud server.
20. A system for monitoring a person, the system comprising: a cloud server, and a person monitor in communication with the cloud server, the person monitor comprising: at least one sensor, an electronic processor in communication with the at least one sensor, a transceiver in communication with: the electronic processor, and the cloud server; and a battery; wherein: power consumed by the person monitor is provided by the battery of the person monitor, and the power consumed by the person monitor is no greater than 1 mW.
21. The system of claim 20, wherein the power consumed by the person monitor is no greater than 1 mW over a 1 second burst.
22. The system of claim 20 or 21, wherein the power consumed by the person monitor is no greater than 1 mW over four seconds per hour.
23. The system of any one of claims 20 to 22, wherein the person monitor is configured to have an idle power consumption no greater than 50 µW.
24. The system of any one of claims 20 to 23, wherein the person monitor is configured so that the transceiver consumes no more than 50 mA of current for 5 ms during transmission or reception.
25. The system of any one of claims 20 to 24, wherein the electronic processor of the person monitor comprises a 32-bit ARM® Cortex®-M4F.
26. The system of any one of claims 20 to 25, wherein the person monitor is configured to communicate with the cloud server using LPWAN.
27. The system of any one of claims 20 to 26, wherein the at least one sensor comprises an SHT40-AD1B-R3 temperature and relative humidity sensor.
28. The system of any one of claims 20 to 27, wherein the at least one sensor comprises an APDS-9250 light sensor.
29. The system of any one of claims 20 to 28, wherein the at least one sensor comprises an SN-GCJA5L PM2.5 sensor.
30. The system of any one of claims 20 to 29, wherein the battery is configured to supply power to the person monitor for 6 to 10 years.
31. The system of any one of claims 20 to 30, wherein the at least one sensor comprises an audio sensor, a pressure sensor, a temperature sensor, a humidity sensor, a light sensor, a CO2 sensor, a VOC sensor, and/or a particulate matter sensor.
32. A system for monitoring a person, the system comprising: a cloud server, and a person monitor configured to monitor a person, the person monitor comprising: at least one sensor, a transceiver in communication with the cloud server, an electronic storage device storing sensor configuration information, the sensor configuration information comprising: a sensor activation state, and/or a sensor acquisition rate; and an electronic processor in electronic communication with the at least one sensor, the transceiver, and the electronic storage device; wherein: the at least one sensor: is configured to acquire data at a rate based at least partially on the sensor acquisition rate, and/or is activated or deactivated based at least partially on the sensor activation state; and the electronic processor is configured to: receive, from the cloud server via the transceiver, updated sensor configuration information, update the sensor configuration information stored within the electronic storage device with the updated sensor configuration information, and adjust a sensor activation state and/or sensor acquisition rate based at least partially on the updated sensor configuration information.
33. The system of claim 32, wherein the at least one sensor comprises an audio sensor, a pressure sensor, a temperature sensor, a humidity sensor, a light sensor, a CO2 sensor, a VOC sensor, and/or a particulate matter sensor.
34. The system of claim 32 or claim 33, wherein the power consumed by the person monitor is no greater than 1 mW.
35. The system of any one of claims 32 to 34, wherein the power consumed by the person monitor is no greater than 1 mW over a 1 second burst.
36. The system of any one of claims 32 to 35, wherein the power consumed by the person monitor is no greater than 1 mW over four seconds per hour.
37. The system of any one of claims 32 to 36, wherein the person monitor is configured to have an idle power consumption no greater than 50 µW.
38. The system of any one of claims 32 to 37, wherein the person monitor is configured so that the transceiver consumes no more than 50 mA of current for 5 ms during transmission or reception.
39. The system of any one of claims 32 to 38, wherein the sensor acquisition rate is 50 milliseconds per reading.
40. A person monitor configured to monitor a person, the person monitor comprising: at least one sensor, an electronic processor in communication with the at least one sensor, and an electronic storage device configured to store at least one previously determined state and associated time of determination; wherein the electronic processor is configured to: determine a state of the monitored person, and associate the determined state with a time of determination; wherein the determined state of the monitored person is based at least partially on: an output of the at least one sensor, and at least one previously determined state and associated time of determination stored within the electronic storage device.
41. The person monitor of claim 40, wherein the at least one sensor comprises an audio sensor, a pressure sensor, a temperature sensor, a humidity sensor, a light sensor, a CO2 sensor, a VOC sensor, and/or a particulate matter sensor.
42. The person monitor of claim 40 or claim 41, wherein the time of determination associated with the at least one previously determined state is the most recent time stored within the electronic storage device.
43. The person monitor of any one of claims 40 to 42, wherein the determined state is a sleeping state.
44. The person monitor of claim 43, wherein the at least one previously determined state comprises a waking state.
45. The person monitor of any one of claims 40 to 42, wherein the determined state is a waking state.
46. The person monitor of claim 45, wherein the at least one previously determined state comprises a sleeping state.
47. A person monitor configured to monitor a person, the person monitor comprising: at least one sensor, an electronic processor in communication with the at least one sensor, a transceiver in communication with the electronic processor, an electronic storage device in communication with the electronic processor, and a battery; wherein: power consumed by the person monitor is provided by the battery of the person monitor, and the power consumed by the person monitor is no greater than 1 mW.
48. The person monitor of claim 47, wherein the power consumed by the person monitor is no greater than 1 mW over a 1 second burst.
49. The person monitor of claim 47 or claim 48, wherein the power consumed by the person monitor is no greater than 1 mW over four seconds per hour.
50. The person monitor of any one of claims 47 to 49, wherein the person monitor is configured to have an idle power consumption no greater than 50 µW.
51. The person monitor of any one of claims 47 to 50, wherein the person monitor is configured so that the transceiver consumes no more than 50 mA of current for 5 ms during transmission or reception.
52. The person monitor of any one of claims 47 to 51, wherein the electronic processor comprises a 32-bit ARM® Cortex®-M4F.
53. The person monitor of any one of claims 47 to 52, wherein the person monitor is configured to communicate with a cloud server.
54. The person monitor of claim 53, wherein the person monitor is configured to communicate with the cloud server using LPWAN.
55. The person monitor of any one of claims 47 to 54, wherein the at least one sensor comprises an SHT40-AD1B-R3 temperature and relative humidity sensor.
56. The person monitor of any one of claims 47 to 55, wherein the at least one sensor comprises an APDS-9250 light sensor.
57. The person monitor of any one of claims 47 to 56, wherein the at least one sensor comprises an SN-GCJA5L PM2.5 sensor.
58. The person monitor of any one of claims 47 to 57, wherein the battery is configured to supply power to the person monitor for 6 to 10 years.
59. The person monitor of any one of claims 47 to 58, wherein the at least one sensor comprises an audio sensor, a pressure sensor, a temperature sensor, a humidity sensor, a light sensor, a CO2 sensor, a VOC sensor, and/or a particulate matter sensor.
60. A person monitor configured to monitor a person, the person monitor comprising: at least one sensor, an electronic processor in communication with the at least one sensor, and an electronic storage device in communication with the electronic processor, the electronic storage device configured to store a previously determined location and associated time of determination; wherein the electronic processor is configured to determine a location of the monitored person based at least partially on: an output from the at least one sensor, and at least one previously determined location and associated time of determination stored within the electronic storage device.
61. The person monitor of claim 60, wherein the at least one sensor comprises an audio sensor, a pressure sensor, a temperature sensor, a humidity sensor, a light sensor, a CO2 sensor, a VOC sensor, and/or a particulate matter sensor.
62. The person monitor of claim 60 or claim 61, wherein the determined location of the monitored person corresponds to a room within a building housing the monitored person.
63. The person monitor of claim 60 or claim 61, wherein the determined location of the monitored person corresponds to an estimated coordinate.
64. The person monitor of any one of claims 60 to 63, wherein the time of determination associated with the at least one previously determined location is the most recent time stored within the electronic storage device.
65. The person monitor of any one of claims 60 to 64, wherein the person monitor is configured to determine the location of a person within a building comprising a plurality of rooms, and the determined location corresponds to a room of the plurality of rooms.
66. The person monitor of claim 65, wherein the at least one previously determined location corresponds to a room which is physically adjacent to the determined location.
67. The person monitor of claim 65 or claim 66, wherein the determined location is physically accessible from the at least one previously determined location.
68. A person monitor configured to monitor a person, the person monitor comprising: at least one sensor, an electronic processor in communication with the at least one sensor, and a receiver configured to receive data from at least one healthcare device; wherein the electronic processor is configured to determine a state of the monitored person based at least partially on an output of the at least one sensor and the data received from the at least one healthcare device.
69. The person monitor of claim 68, wherein the at least one healthcare device comprises a blood pressure monitor, a glucose monitor, a weigh scale, and/or a dialysis machine.
70. The person monitor of claim 68 or claim 69, wherein the receiver is configured to receive data from the at least one healthcare device via Bluetooth Low Energy.
71. The person monitor of any one of claims 68 to 70, wherein the person monitor is configured to communicate with a cloud server.
72. A person monitor configured to monitor at least one person within a building comprising at least one room, the person monitor comprising: at least one CO2 sensor, and at least one H2O sensor; and an electronic processor in communication with the at least one CO2 sensor and at least one H2O sensor; wherein: the processor is configured to determine a number of people within a room of the building, the determination based at least partially on: an output from the at least one CO2 sensor, and an output from the at least one H2O sensor.
73. The person monitor of claim 72, wherein the person monitor comprises a transceiver configured to receive external humidity data and/or external temperature data.
74. A person monitor configured to monitor at least one person within a building comprising a plurality of rooms, the person monitor comprising: at least one CO2 sensor, at least one H2O sensor, and an electronic processor in communication with the at least one CO2 sensor and at least one H2O sensor; wherein: the processor is configured to detect a movement event corresponding to the at least one person moving from a first room to a second room, the detection based at least partially on: an output from the at least one CO2 sensor, and an output from the at least one H2O sensor.
75. The person monitor of claim 74, wherein the person monitor comprises a transceiver configured to receive external humidity data and/or external temperature data.
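Claims 74 and 75 cover detecting a movement event between rooms from CO2 and H2O sensor outputs. One simple way to illustrate this is to flag opposing CO2 trends in two rooms over the same window; the sketch below is not from the application, and the trend threshold is hypothetical.

```python
# Hypothetical sketch: flag a possible room-to-room movement event
# when CO2 falls in the first room while rising in the second over
# the same sampling window. The threshold is illustrative only.
def movement_event(room_a_ppm: list[float], room_b_ppm: list[float],
                   min_trend_ppm: float = 20.0) -> bool:
    trend_a = room_a_ppm[-1] - room_a_ppm[0]  # negative: room emptying
    trend_b = room_b_ppm[-1] - room_b_ppm[0]  # positive: room filling
    return trend_a <= -min_trend_ppm and trend_b >= min_trend_ppm


print(movement_event([650.0, 630.0, 610.0], [430.0, 460.0, 480.0]))  # True
```

A corresponding check on the H2O channels, corrected with the external humidity data of claim 75, could reduce false positives from ventilation changes.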
PCT/NZ2024/050010 2023-02-09 2024-02-08 Person monitor WO2024167421A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NZ79704323 2023-02-09
NZ797043 2023-02-09

Publications (1)

Publication Number Publication Date
WO2024167421A1 true WO2024167421A1 (en) 2024-08-15

Family

ID=92263228

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NZ2024/050010 WO2024167421A1 (en) 2023-02-09 2024-02-08 Person monitor

Country Status (1)

Country Link
WO (1) WO2024167421A1 (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190385711A1 (en) * 2018-06-19 2019-12-19 Ellipsis Health, Inc. Systems and methods for mental health assessment
US20200302951A1 (en) * 2019-03-18 2020-09-24 Wave2Cloud LLC Activity recognition system for security and situation awareness
US20210005069A1 (en) * 2019-07-03 2021-01-07 Aryan Mangal Abuse Alert System by Analyzing Sound
GB2588036A (en) * 2017-06-28 2021-04-14 Kraydel Ltd Sound monitoring system and method
CN113241060A (en) * 2021-07-09 2021-08-10 明品云(北京)数据科技有限公司 Security early warning method and system
US20210287792A1 (en) * 2020-03-11 2021-09-16 Hao-Yi Fan Care system and automatic care method
US20210352176A1 (en) * 2020-04-06 2021-11-11 Koninklijke Philips N.V. System and method for performing conversation-driven management of a call
US20210375278A1 (en) * 2020-06-02 2021-12-02 Universal Electronics Inc. System and method for providing a health care related service
KR20220019340A (en) * 2020-08-10 2022-02-17 한국전자기술연구원 Apparatus and method for monitoring emotion change through user's mobility and voice analysis in personal space observation image
US20220061694A1 (en) * 2020-09-02 2022-03-03 Hill-Rom Services Pte. Ltd. Lung health sensing through voice analysis


Similar Documents

Publication Publication Date Title
US20240099607A1 (en) System, sensor and method for monitoring health related aspects of a patient
US7173525B2 (en) Enhanced fire, safety, security and health monitoring and alarm response method, system and device
US7126467B2 (en) Enhanced fire, safety, security, and health monitoring and alarm response method, system and device
US7148797B2 (en) Enhanced fire, safety, security and health monitoring and alarm response method, system and device
US7129833B2 (en) Enhanced fire, safety, security and health monitoring and alarm response method, system and device
AU2005267071B2 (en) Enhanced acoustic monitoring and alarm response
McCullagh et al. Nocturnal sensing and intervention for assisted living of people with dementia
WO2024167421A1 (en) Person monitor
JP6830298B1 (en) Information processing systems, information processing devices, information processing methods, and programs
Pnevmatikakis Recognising daily functioning activities in smart homes
US12148527B2 (en) Sensor-based monitoring of at-risk person at a dwelling
US20220230746A1 (en) Sensor-based monitoring of at-risk person at a dwelling
US20230059947A1 (en) Systems and methods for awakening a user based on sleep cycle
US20230310688A1 (en) Sensor systems and methods
US20230337953A1 (en) Incontinence detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24753719

Country of ref document: EP

Kind code of ref document: A1