US20210287056A1 - Ultrasound Analytics for Actionable Information - Google Patents
- Publication number
- US20210287056A1 (application US 17/195,005)
- Authority
- US
- United States
- Prior art keywords
- person
- drone
- autonomous drone
- property
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G06K9/6289—
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1113—Local tracking of patients, e.g. in a hospital or private home
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/74—Details of notification to user or communication with user or patient ; user input means
- A61B5/746—Alarms related to a physiological condition, e.g. details of setting alarm thresholds or avoiding false alarms
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64C—AEROPLANES; HELICOPTERS
- B64C39/00—Aircraft not otherwise provided for
- B64C39/02—Aircraft not otherwise provided for characterised by special use
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64C—AEROPLANES; HELICOPTERS
- B64C39/00—Aircraft not otherwise provided for
- B64C39/02—Aircraft not otherwise provided for characterised by special use
- B64C39/024—Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64D—EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
- B64D47/00—Equipment not otherwise provided for
- B64D47/02—Arrangements or adaptations of signal or lighting devices
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S15/00—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
- G01S15/88—Sonar systems specially adapted for specific applications
- G01S15/89—Sonar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/0094—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/251—Fusion techniques of input or preprocessed data
-
- G06K9/00362—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/22—Electrical actuation
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B25/00—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
-
- B64C2201/12—
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2101/00—UAVs specially adapted for particular uses or applications
- B64U2101/30—UAVs specially adapted for particular uses or applications for imaging, photography or videography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
Definitions
- This specification relates generally to integrated security technology, and in particular, to integrated security technology to provide actionable information to first responders using ultrasound data.
- Integrated security includes the use of security hardware in place on a property, such as a residential property and a commercial property.
- Typical uses of security at a particular property include detecting intrusion, detecting unlocked doors, detecting when an individual is harmed at the property, and tripping one or more alarms.
- the subject matter of the present disclosure is related to systems and techniques for gathering information on the health of one or more individuals trapped in an accident to provide actionable information to a first responder system.
- the techniques may use ultrasound, camera images, GPS locational data, and machine learning algorithms to provide the actionable information to the first responder system.
- the machine learning algorithms may include algorithms such as one or more neural network models, Bayesian learning models, or any other type of machine learning technique, to detect injuries of the individuals trapped in the accident.
- the systems may transmit a notification to a first responder system and other individuals that may know the injured individual indicating the individual is trapped and injured.
- A benefit of providing the indication of the injured individual is that other individuals related to the injured individual can be made aware of his or her status in the case of an emergency, such as a fire, earthquake, or flood, to name a few examples. Additionally, by notifying the first responder system, the first responder system can take one or more steps to save the injured individual when time is of the essence and the injured individual's condition is severe. The one or more steps may include pinpointing the location of the injured individual at a facility that has collapsed due to a natural disaster, when finding the injured individual is next to impossible with the human eye alone; notifying one or more other individuals of the injured individual's status and location; and determining the injury of the injured individual in an efficient manner so that the correct care can be provided.
- the techniques may utilize a set of sensors including a camera (or an array of cameras), a Global Positioning System (GPS) device, and an ultrasound transducer.
- Each of the sensors may be co-located and mounted on one unit, such as a plane or drone.
- the sensors can communicate to a backend over a WiFi or cellular communication network.
- the backend is responsible for transforming the data provided by each of the cameras, the GPS device, and the ultrasound transducer into one or more various types of data and performing advanced analytics on the various types of data.
- the backend is responsible for aggregating the various types of data into an aggregated map that incorporates all usable information provided by the camera, the GPS device, and the ultrasound transducer.
- the backend may provide the aggregated map to a first responder system such that the first responder system can identify and prioritize providing actionable rescue teams for the identified individuals.
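- As a rough illustration of the aggregation described above, the sketch below groups co-located camera, GPS, and ultrasound captures into per-location map entries. The class names, fields, and the GPS-proximity grouping rule are illustrative assumptions, not the backend's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SensorObservation:
    # One co-located capture from the drone-mounted camera, GPS device, and ultrasound transducer.
    timestamp: float
    gps: Tuple[float, float, float]          # (latitude, longitude, altitude)
    camera_image: bytes                      # encoded image frame
    ultrasound_scan: Optional[bytes] = None  # present only after a close-range scan

@dataclass
class AggregatedMapEntry:
    # Everything collected so far for one approximate location.
    location: Tuple[float, float, float]
    images: List[bytes] = field(default_factory=list)
    ultrasound_scans: List[bytes] = field(default_factory=list)
    severity: Optional[int] = None           # filled in later by the analytics stage

def aggregate(observations: List[SensorObservation]) -> List[AggregatedMapEntry]:
    """Group observations into map entries by rough GPS proximity (about 10 m)."""
    entries: List[AggregatedMapEntry] = []
    for obs in observations:
        entry = next((e for e in entries
                      if abs(e.location[0] - obs.gps[0]) < 1e-4
                      and abs(e.location[1] - obs.gps[1]) < 1e-4), None)
        if entry is None:
            entry = AggregatedMapEntry(location=obs.gps)
            entries.append(entry)
        entry.images.append(obs.camera_image)
        if obs.ultrasound_scan is not None:
            entry.ultrasound_scans.append(obs.ultrasound_scan)
    return entries

obs = SensorObservation(timestamp=0.0, gps=(47.6101, -122.2015, 10.0),
                        camera_image=b"<frame>", ultrasound_scan=b"<scan>")
print(len(aggregate([obs])))  # 1 entry
```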
- a method is performed by one or more computers of a monitoring system.
- the method includes generating, by one or more sensors of a monitoring system that is configured to monitor a property, first sensor data; based on the first sensor data, generating, by the monitoring system, an alarm event for the property; based on generating the alarm event for the property, dispatching, by the monitoring system, an autonomous drone; navigating, by the autonomous drone of the monitoring system, the property; generating, by the autonomous drone of the monitoring system, second sensor data; based on the second sensor data, determining, by the monitoring system, a location within the property where a person is likely located; and providing, for output by the monitoring system, data indicating the location within the property where the person is likely located.
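- A minimal sketch of this control flow, with each claimed step injected as a callable; the function names and the trivial stand-ins in the example wiring are assumptions, not the monitoring system's actual API.

```python
from typing import Callable, Optional, Tuple

Location = Tuple[float, float]

def run_monitoring_cycle(
    read_fixed_sensors: Callable[[], dict],
    is_alarm_event: Callable[[dict], bool],
    dispatch_drone: Callable[[], dict],                 # navigates the property, returns second sensor data
    locate_person: Callable[[dict], Optional[Location]],
    output: Callable[[dict], None],
) -> None:
    """One pass through the claimed steps, each supplied as a callable."""
    first_sensor_data = read_fixed_sensors()            # generate first sensor data
    if not is_alarm_event(first_sensor_data):           # alarm event for the property?
        return
    second_sensor_data = dispatch_drone()               # dispatch the autonomous drone
    location = locate_person(second_sensor_data)        # where is a person likely located?
    if location is not None:
        output({"person_likely_located_at": location})  # provide the location for output

# Example wiring with trivial stand-ins:
run_monitoring_cycle(
    read_fixed_sensors=lambda: {"motion": True},
    is_alarm_event=lambda data: data.get("motion", False),
    dispatch_drone=lambda: {"image": b"<frame>", "gps": (47.6101, -122.2015)},
    locate_person=lambda data: data.get("gps"),
    output=print,
)
```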
- Other implementations of this and other aspects of the disclosure include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices.
- a system of one or more computers can be so configured by virtue of software, firmware, hardware, or a combination of them installed on the system that in operation cause the system to perform the actions.
- One or more computer programs can be so configured by virtue of having instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
- Implementations may include one or more of the following features. For example, in some implementations, the method further includes, based on navigating the property, generating, by the monitoring system, a map of the property; and providing, by the monitoring system, for output, the data indicating the location within the property where the person is likely located by providing, for output, the map of the property with the location where the person is likely located.
- the method further includes determining, by the monitoring system, that the person is likely injured based on second sensor data; and providing, for output by the monitoring system, data indicating that the person is likely injured.
- the method further includes based on determining that the person is likely injured, generating, by the autonomous drone of the monitoring system, using an additional onboard sensor, third sensor data; based on the third sensor data, determining, by the monitoring system, a severity of the injury to the person; and providing, for output by the monitoring system, the data indicating that the person is likely injured by providing, for output, the data indicating that the person is likely injured and data indicating the severity of the injury to the person.
- In some implementations, the onboard sensor is a camera and the second sensor data is image data, and the additional onboard sensor is an ultrasound sensor and the third sensor data is ultrasound data.
- the method further includes providing, by the monitoring system, second sensor data as an input to a model trained to identify locations of people; and determining, by the monitoring system, the location within the property where a person is likely located based on an output of the model trained to identify locations of people based on the second sensor data.
- the method further includes receiving, by the monitoring system, labeled training data that includes first labeled sensor data that corresponds to locations with people and second labeled sensor data that corresponds to locations without people; and training, by the monitoring system, using machine learning, the first labeled sensor data, and the second labeled sensor data, the model to identify locations of people based on the second sensor data.
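- The disclosure does not name a specific model family for the person-location model, so the sketch below trains scikit-learn's LogisticRegression purely as a stand-in; the synthetic feature vectors and labels only illustrate the labeled-training-data flow described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy feature vectors standing in for processed second sensor data (e.g., image
# embeddings): label 1 = location contains a person, label 0 = location does not.
rng = np.random.default_rng(0)
X_person = rng.normal(loc=1.0, size=(50, 8))
X_empty = rng.normal(loc=-1.0, size=(50, 8))
X = np.vstack([X_person, X_empty])
y = np.array([1] * 50 + [0] * 50)

# Train on the first (with people) and second (without people) labeled sensor data.
model = LogisticRegression().fit(X, y)

# At run time the drone's second sensor data is featurized the same way and scored.
new_observation = rng.normal(loc=1.0, size=(1, 8))
print("person likely present:", bool(model.predict(new_observation)[0]))
```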
- the method further includes based on the second sensor data, determining, by the monitoring system, that the person is likely alive; and providing, by the monitoring system, for output, data indicating that the person is likely alive.
- In some implementations, the second sensor is a microphone and the second sensor data is audio data.
- the method includes providing, by the monitoring system, the audio data as an input to a model trained to identify human sounds; and determining, by the monitoring system, that the person is likely alive based on an output of the model trained to identify human sounds.
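- In place of the trained human-sound model, the sketch below uses a crude short-time energy check to show where a "likely alive" decision would sit in the pipeline; the window size, threshold, and synthetic audio are all illustrative assumptions rather than the disclosed model.

```python
import numpy as np

def likely_alive(audio: np.ndarray, sample_rate: int, energy_threshold: float = 0.01) -> bool:
    """Flag the person as likely alive if any 0.5 s window of drone-captured
    audio carries noticeable energy (a stand-in for the trained sound model)."""
    window = int(0.5 * sample_rate)
    for start in range(0, len(audio) - window + 1, window):
        frame = audio[start:start + window]
        if float(np.mean(frame ** 2)) > energy_threshold:
            return True
    return False

# Example: a faint 300 Hz tone (standing in for a voice) surrounded by silence.
sr = 16000
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
audio = np.zeros_like(t)
audio[sr:sr + sr // 2] = 0.2 * np.sin(2 * np.pi * 300.0 * t[sr:sr + sr // 2])
print(likely_alive(audio, sr))  # True
```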
- the method further includes based on determining a location within the property where a person is likely located, activating, by the monitoring system, a communication channel between a device outside the property and the autonomous drone.
- FIG. 1 is a contextual diagram of an example system of an integrated security environment for detecting one or more injured individuals at a monitored facility.
- FIG. 2 is a contextual diagram of an example system of a building destruction environment for detecting one or more injured individuals.
- FIG. 3 is a contextual diagram of an example system for training a neural network model for ultrasound analytics.
- FIG. 4 is a flowchart of an example process for providing data corresponding to a detected individual for ultrasound analytics.
- FIG. 5 is a flowchart of an example process for processing data corresponding to a detected individual for ultrasound analytics.
- FIG. 6 is a block diagram of an example integrated security environment for ultrasound analytics that may utilize various security components.
- FIG. 1 is a contextual diagram of an example system 100 of an integrated security environment for detecting one or more injured individuals at a monitored facility.
- Although system 100 is shown and described as including a particular set of components in a monitored property 102 (a control unit server 104, a network 106, cameras 108, lights 110, sensors 112, home devices 114, a security panel 126, a drone 130, a network 134, a remote processing unit 136, and a first responder system 140), the present disclosure need not be so limited.
- only a subset of the aforementioned components may be used by the integrated security environment for monitoring the control unit servers in each monitored property.
- In some implementations, the functionality of a control unit, such as the control unit server 104, is stored in the remote processing unit 136.
- other alternative systems also fall within the scope of the present disclosure such as a system 100 that does not use a control unit server 104 . Rather, these systems would communicate directly with the remote processing unit 136 to perform the monitoring. For these reasons, the system 100 should not be viewed as limiting the present disclosure to any particular set of necessary components.
- a residential facility 102 (e.g., home) of user 118 is monitored by a control unit server 104 that includes components within the residential facility 102 .
- the components within the residential facility 102 may include one or more cameras 108 , one or more lights 110 , one or more sensors 112 , one or more home devices 114 , and the security panel 126 .
- the one or more cameras 108 may include video cameras that are located at the exterior of the residential facility 102 near the front door 116, as well as at the interior of the residential facility 102 near the front door 116.
- the one or more sensors 112 may include a motion sensor located at the exterior of the residential facility 102 , a front door sensor that is a contact sensor positioned at the front door 116 , and a lock sensor that is positioned at the front door 116 and each window.
- the contact sensor may sense whether the front door 116, the garage door, or the window is in an open position or a closed position.
- the lock sensor may sense whether the front door 116 and each window is in an unlocked position or a locked position.
- the one or more home devices 114 may include home appliances such as a washing machine, a dryer, a dishwasher, an oven, a stove, a microwave, and a laptop, to name a few examples.
- the security panel 126 may receive one or more messages from a corresponding control unit server 104 and a remote processing unit 136 .
- the control unit server 104 communicates over a short-range wired or wireless connection over network 106 with connected devices such as each of the one or more cameras 108, one or more lights 110, one or more home devices 114 (a washing machine, a dryer, a dishwasher, an oven, a stove, a microwave, a laptop, etc.), one or more sensors 112, the drone 130, and the security panel 126 to receive sensor data descriptive of events detected by the one or more cameras 108, the one or more lights 110, the drone 130, and the one or more home devices 114 in the residential facility 102.
- each of the connected devices may connect via Wi-Fi, Bluetooth, or any other protocol used to communicate over network 106 to the control unit server 104 .
- control unit server 104 communicates over a long-range wired or wireless connection with a remote processing unit 136 over network 134 via one or more communication links.
- the remote processing unit 136 is located remote from the residential facility 102 , and manages the monitoring at the residential facility 102 , as well as other (and, perhaps, many more) monitoring systems located at different properties that are owned by different users.
- the remote processing unit 136 communicates bi-directionally with the control unit server 104 . Specifically, the remote processing unit 136 receives sensor data descriptive of events detected by the sensors included in the monitoring system of the residential facility 102 . Additionally, the remote processing unit 136 transmits instructions to the control unit server 104 for particular events.
- a user 118 may install a device to monitor the residential property 102 from the outside.
- the user 118 may install a drone 130 and a corresponding charging station 142 to monitor the activity occurring outside and inside the residential property 102 .
- the control unit server 104 may detect when the drone 130 has departed from the charging station 142 .
- the drone 130 may automatically depart from the charging station 142 at predetermined times set by the user 118 according to a signature profile.
- the drone 130 may fly a predetermined path 132 as set by the user according to a profile.
- the predetermined path 132 may be any path around the residential property 102 as described by the signature profile.
- the signature profile will be further explained below.
- the drone 130 will have a set of devices 131 for providing sensor data to the control unit 104 .
- the set of devices 131 may include a camera or an array of cameras, a GPS device, and an ultrasound transducer, to name a few examples.
- the drone 130 may instruct the set of devices 131 to record and monitor while the drone 130 flies the predetermined path 132 .
- user 118 may be in the residential facility 102 and can arm the residential facility 102 at any point in time. In doing so, the user 118 may turn off each of the one or more lights 110 , turn off each of the one or more home devices 114 , lock the front door 116 , and close and lock each of the one or more windows.
- the user 118 may interact with a client device 120 to activate a signature profile, such as “arming home” for the residential facility 102 .
- the user 118 may keep the one or more lights 110 on, keep the one or more home devices 114 on while setting the “arming home” profile.
- the client device 120 may display a web interface, an application, or a device specific application for a smart home system.
- the client device 120 can be, for example, a desktop computer, a laptop computer, a tablet computer, a wearable computer, a cellular phone, a smart phone, a music player, an e-book reader, a navigation system, a security panel, or any other appropriate computing device.
- the client device 120 may communicate with the control unit server 104 over the network 106 .
- the network 106 may be wired or wireless or a combination of both and can include the Internet.
- user 118 may communicate with the client device 120 to activate a signature profile for the residential facility 102 .
- user 118 may first instruct the control unit server 104 to set a signature profile associated with arming the residential facility 102 .
- user 118 may use a voice command to say “Smart Home, arm house,” to the client device 120 .
- the voice command may include a phrase, such as “Smart Home” to trigger the client device 120 to actively listen to a command following the phrase.
- the phrase “Smart Home” may be a predefined user configured term to communicate with the client device 120 .
- the client device 120 can send the voice command to the control unit server 104 over the network 106 .
- control unit server 104 may notify the remote processing unit 136 that the residential facility 102 is to be armed.
- control unit 104 may set associated parameters in response to receiving the voice command.
- control unit 104 can send back a confirmation to the client device 120 in response to arming the residential facility 102 and setting the associated parameters. For example, the control unit server 104 may transmit a response to the client device 120 that reads “Smart Home armed.”
- the user 118 and others may define and store signature profiles in the control unit server 104 .
- the user 118 and others may define and store signature profiles in the remote processing unit 136 .
- the signature profile may be associated with each user and allow for various use cases of the devices in the residential facility 102 .
- Each of the signature profiles can be associated with one user, such as user 118 or user 124 .
- a user 118 may create a signature profile for arming the residential facility 102 .
- a user 118 may create a signature profile for monitoring the residential facility 102 with a drone 130.
- user 118 may store one or more parameters associated with a use case in his or her signature profile.
- the one or more parameters for each use case may describe a volume level in decibels (dB) of the speakers, an aperture amount for the cameras 108, a brightness intensity level of the lights 110, turning on home devices 114 such as a television, a laptop, or one or more fans, setting a specific temperature of a thermostat, opening or closing the shades of a window a particular amount, alarm settings corresponding to the security panel 126, defining a predetermined path and a length of time for the drone 130 to monitor the residential facility 102, and any other parameters to describe the use case.
- user 118 may create a signature profile with a use case for “arm home”.
- For this use case, the user 118 may define a volume level of 0 dB for the speakers, an aperture of f/16 for the one or more cameras 108, zero lumens for the one or more lights 110, turning off a television, turning off a laptop, turning on fans, setting the thermostat to 67 degrees Fahrenheit, fully closing the blinds of the one or more windows, and setting the security panel 126 to notify the remote processing unit 136 of any detected alarms.
- the user 118 may define a predetermined path 132 for the drone 130 to monitor around the residential facility 102 .
- the predetermined path 132 may be drawn by the user 118 through interaction with the smart home application on the client device 120 .
- the user 118 may additionally define the height and speed in which the drone 130 flies around the residential property 102 .
- the user 118 may draw a circle on a map provided by the smart home application on the client device 120, set the altitude to 10 feet, and set the drone 130's flying speed to 15 miles per hour.
- the user 118 can define a period of time for the drone to monitor the residential property 102 .
- the user 118 may enter the time of 1 hour into the smart home application on the client device 120 .
- the user 118 can instruct the drone to return to the charging station 142 or to traverse a new predetermined path around residential property 102 , different from predetermined path 132 .
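- The example parameters above could be captured in a signature profile data structure like the hypothetical one below; every key name and GPS waypoint is invented for illustration, and only the numeric values (0 dB, f/16, 67 degrees Fahrenheit, 10 feet, 15 miles per hour, 1 hour) come from the examples in this description.

```python
# Hypothetical "arm home" signature profile; key names and GPS waypoints are invented.
arming_home_profile = {
    "name": "arm home",
    "speaker_volume_db": 0,
    "camera_aperture": "f/16",
    "light_lumens": 0,
    "thermostat_f": 67,
    "window_blinds": "closed",
    "notify_remote_processing_unit_on_alarm": True,
    "drone_patrol": {
        "path_gps": [(47.6101, -122.2015), (47.6103, -122.2013), (47.6101, -122.2011)],
        "altitude_ft": 10,
        "speed_mph": 15,
        "duration_minutes": 60,
    },
}

print(arming_home_profile["drone_patrol"]["altitude_ft"])  # 10
```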
- control unit server 104 sets the parameters for the signature profile when the user 118 speaks “Smart home, arming the home” to client device 120.
- the control unit server 104 saves the parameters in memory defined by the user 118 in the smart home application on the client device 120 in response to the user setting the parameters.
- control unit server 104 may transmit the set parameters for the signature profile to the remote processing unit 136 to save for backup purposes.
- control unit server 104 may increase the sensitivity corresponding to each of the one or more sensors 112 for the “arming the home” use case. Specifically, control unit server 104 may increase the sensitivity for the front door sensor, the garage door sensor, and the lock sensor by a predetermined factor so that smaller movements of the front door or garage door trigger an alarm event. For example, the sensitivity may be increased by a factor of five.
- control unit server 104 may send a response to display a message on the client device 120 that says “Smart Home, home armed” once the control unit server 104 sets the parameters.
- the control unit server 104 may also transmit the same response to the display 128 of security panel 126 once the control unit server 104 sets the parameters.
- the control unit server 104 may transmit a message to the remote processing unit 136 that the residential facility 102 finished arming.
- the drone 130's set of devices 131 may seek to detect the health of one or more individuals inside the residential facility 102.
- the set of devices 131 may gather information on the health of the one or more individuals inside the residential facility 102 .
- the drone 130 scans areas external and internal to the residential facility 102 .
- the drone 130 may scan areas in proximity to the residential facility 102 , scan through the walls of the residential facility 102 to see the interior of the residential facility 102 , and monitor each level of the residential facility 102 .
- the drone 130 uses local machine learning algorithms along with ultrasound data, images, and GPS locational data captured by the set of devices 131 to detect one or more individuals in the residential facility 102 .
- the drone 130 may move closer to the individual to perform a more detailed scan.
- the drone 130 then sends the captured data to the control unit server 104 for further processing to determine the health of the one or more individuals.
- the control unit server 104 may also acquire sensor data from the cameras 108 , the lights 110 , the sensors 112 , and the home devices 114 in response to receiving the captured data from the drone 130 .
- the control unit server 104 provides the captured data and the sensor data to the remote processing unit 136 for further processing and a determination of whether a first responder system 140 should be contacted.
- the user 118 sets the parameters for the “arming home” signature profile that includes a time for the drone to initiate monitoring the residential property 102 .
- the control unit server 104 sends an indication to the drone 130 via network 106 to initiate monitoring the residential facility 102 .
- the indication may include GPS coordinates of the predetermined path, the length of time to travel, and the altitude or varying altitude around the residential facility 102 in which to travel.
- the remote processing unit 136 may send an indication to the control unit server 104 to instruct the drone 130 to initiate the monitoring of the residential facility 102 .
- the drone 130 powers on, flies away from the charging station 142 , and flies the predetermined path 132 as set in the “arming home” signature profile.
- the drone 130 uses the set of sensors 131 to detect one or more individuals in the residential facility 102 .
- the control unit server 104 may use the cameras 108 , the lights 110 , the sensors 112 , and the home devices 114 in conjunction with the set of sensors 131 to detect one or more individuals in the residential facility 102 . For instance, as the drone 130 travels around the predetermined path 132 , the drone 130 may send GPS coordinate updates to the control unit server 104 . The control unit server 104 may turn on one or more of the lights 110 in one or more areas currently being viewed by the drone 130 to improve detectability. In addition, the control unit server 104 may increase sensitivity of one or more sensors 112 in the one or more areas currently being viewed by the drone 130 to also improve detectability.
- the control unit server 104 can transmit a GPS coordinate of the detected motion sensor to the drone 130 to focus the set of devices 131 on the area designated by the transmitted GPS coordinate.
- the GPS coordinate may be inside or outside the residential facility 102 .
- the drone 130 detects an individual in the residential facility 102 .
- the set of devices 131 captures data during the drone 130's flight around the predetermined path 132.
- the data includes camera images and GPS locational data.
- the drone 130 feeds the camera images and the GPS locational data to a local processing engine included in the drone 130's memory.
- the local processing engine produces an indication that an individual has been detected in the camera images.
- the drone 130 moves closer to that individual to perform an ultrasound scan.
- the drone 130 may move closer to a window of the residential facility 102 or closer to a wall of the residential facility 102 to perform the ultrasound scan.
- the drone 130 may perform an ultrasound scan of the user 118 at different portions of the user 118's body. For instance, the drone 130 may initiate scanning the user 118's head, then move to scan the user 118's shoulders, and continue down to the user 118's feet. These ultrasound scans will be used later in constructing a mapped environment of the user 118, as sketched below.
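- A minimal sketch of such a head-to-feet scan sequence; the body-section list, the drone methods, and the fake drone used to run the example are assumptions, since no drone API is defined here.

```python
# Body sections stepped through for the scan; the exact list is an assumption.
BODY_SECTIONS = ["head", "shoulders", "chest", "abdomen", "legs", "feet"]

def scan_person(drone, person_location):
    """Return one ultrasound capture per body section for the later mapped environment."""
    drone.move_near(person_location)
    scans = {}
    for section in BODY_SECTIONS:
        drone.aim_transducer_at(section)
        scans[section] = drone.capture_ultrasound()
    return scans

class _FakeDrone:
    # Stand-in so the sketch runs without hardware.
    def move_near(self, location): pass
    def aim_transducer_at(self, section): pass
    def capture_ultrasound(self): return b"<ultrasound frame>"

print(list(scan_person(_FakeDrone(), (47.6101, -122.2015)).keys()))
```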
- the drone 130 detects another individual, such as user 124 , in the residential facility 102 .
- the drone 130 performs similar steps as described in stage (B) to detect user 124 .
- the local processing engine in the drone 130 produces an indication of a detected person.
- the local processing engine in the drone 130 may produce a recognition of a detected person. For instance, based on the training of the local processing engine, the local processing engine may produce an indication that a person has been detected or that the person detected is user 124 or Bob. This indication will be further described below.
- the drone 130 provides the captured drone data 133 to the control unit server 104 over the network 106 .
- the captured drone data 133 includes the captured images, the GPS locational data, and the indication provided by the local processing engine.
- the control unit server 104 receives the captured drone data 133 .
- the control unit server 104 combines the captured drone data 133 with data provided by the one or more cameras 108 , the one or more lights 110 , and the one or more sensors 112 .
- the control unit server 104 may package together the captured drone data 133 with images and video from the cameras 108 , a brightness level from the one or more lights 110 , and motion or contact data from the one or more sensors 112 when a detection was made by the drone 130 .
- control unit server 104 may include the data changes indicating the brightness level of the one or more lights 110 and the sensitivity changes of the one or more sensors 112 to improve detectability for the drone 130 .
- This change data may facilitate the remote processing unit 136 in determining typical paths of one or more individuals in the residential facility 102 . This can be used to update the predetermined path 132 of the drone 130 for improved tracking of individuals.
- the remote processing unit 136 receives the sensor data 135 .
- the remote processing unit 136 includes a remote processing engine to produce an indication of the health of the individual detected in the captured image.
- the remote processing engine of the remote processing unit 136 includes one or more machine learning algorithms that can produce an indication of an injury of the individual from the sensor data 135 .
- the injuries may include one or more broken bones, external bleeding, and burn marks, to name a few examples.
- the indication output by the remote processing unit 136 may include an image from the ultrasound data including the detected individual and a tagged description of the injury.
- the remote processing engine provides the image and the tagged description of the injury to a severity indicator.
- the severity indicator tags the input with a number indicating the severity of the individual's health in the attached image.
- the control unit server 104 may provide sensor data 135 of two detected individuals in residential facility 102 , user 118 and user 124 .
- the remote processing engine of the remote processing unit 136 may produce a severity indication of zero, corresponding to one or more images from the ultrasound data of user 118 .
- the severity indication of zero indicates that user 118 has no injury or appears to have no injury.
- the remote processing engine may produce a severity indication of ten, corresponding to one or more images from the ultrasound data of user 124 , indicating a severe injury.
- the remote processing engine may detect that user 124 has broken his arm, as illustrated by the images in the ultrasound data.
- the remote processing engine provides a notification to the owner of the residential facility 102 .
- the notification includes one or more images and the corresponding severity of an injury of an identified individual in each of the one or more images.
- the remote processing engine in the remote processing unit 136 provides the notification to the client device 120 of user 118 .
- the client device 120 may display the one or more images and the corresponding severity of the injury of the identified individual in each of the one or more images to the user 118 .
- the severity of the injury may include a number such as ten, or display a message that recites “User Broke Arm” 122, as illustrated in FIG. 1.
- the user 118 may proceed to locate the injured individual, user 124 , to provide emergency assistance.
- the remote processing engine provides a notification to a first responder system 140 .
- the notification includes a reconstructed mapped environment of the images of the ultrasound scans and a corresponding severity indicator for each of the images.
- the reconstructed mapped environment may include an image converted from ultrasound of user 124's head, user 124's shoulders, user 124's chest, and the remaining body sections down to user 124's feet.
- Each of these ultrasound images reconstructed in the mapped environment may include a severity indicator.
- the severity indicator corresponding to the head of user 124 may be zero, the severity indicator corresponding to the shoulder of user 124 may be one, the severity indicator corresponding to the arms of user 124 may be ten, and the severity indicator corresponding to the legs of user 124 may be two.
- This reconstructed mapped environment is provided to the first responder system 140 to facilitate determining an injury of the user, such as user 124 .
- the first responder system 140 may be police officers, firefighters, paramedics, and emergency medical technicians, to name a few examples.
- FIG. 2 is a contextual diagram of an example system of a building destruction environment 200 for detecting one or more injured individuals.
- the building destruction environment 200 includes a demolished building 202 as a result of a natural disaster, such as an earthquake.
- the demolished building 202 includes one or more trapped individuals that may have life threatening injuries.
- the demolished building 202 includes user 204 lying down on the second floor of the demolished building 202 and user 206 lying under the rubble at the bottom of the demolished building 202 .
- a first responder, such as a firefighter or a police officer, may deploy the drone 208 to fly a path 210 around the demolished building 202 to find the one or more trapped individuals and detect their health status.
- FIG. 2 is similar to FIG. 1 without the inclusion of a control unit server 104 and one or more sensors at the demolished building 202 .
- the only data provided to the remote processing unit 226 includes data retrieved from the drone 208 itself.
- the drone 208 can scan along path 210 until retrieved by a first responder via a client device.
- In stage (A′), which is similar to stage (A) of FIG. 1, the drone 208 flies a path 210 to find one or more individuals trapped in the demolished building 202.
- the path 210 may be preprogrammed by the first responder located at the scene of the building destruction environment 200 .
- the path 210 may be a random path taken by the drone 208 around the demolished building 202 .
- the drone 208 may fly the path 210 until a first responder retrieves the drone 208 .
- the drone 208 may fly the path 210 until the first responder or first responder system 230 receives an indication from the remote processing unit 226 indicating a location of the one or more individuals in the demolished building 202 and a corresponding health status of the located one or more individuals.
- the drone 208 detects user 204 and user 206 in the demolished building 202 , as illustrated by the arrows of detected person 212 .
- the drone 208 utilizes the camera and GPS device from the set of sensors onboard the drone 208 to detect user 204 and 206 in the demolished building 202 .
- the drone 208 utilizes a local processing engine that uses one or more machine learning algorithms to detect individuals from the captured images. Once the local processing engine identifies one or more individuals in the captured images, the local processing engine tags the individuals in the image with GPS locational data from the GPS device.
- the GPS locational data describes the locational position of the detected individual. For instance, the drone 208 calculates the locational position of the detected individual using the GPS locational position of the drone 208 , the altitude of the drone 208 , and an estimated distance between the drone 208 and the detected individual using slope estimation.
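- A worked sketch of that position estimate: the ground range is recovered from the slant range and the drone's altitude (the slope estimation), then projected out from the drone's GPS fix. The bearing input and the flat-earth degree conversion are added assumptions beyond what is stated above.

```python
import math

def estimate_person_position(drone_lat, drone_lon, altitude_m, slant_range_m, bearing_deg):
    """Project the detected person's ground position from the drone's GPS fix."""
    # Slope estimation: horizontal distance from slant range and altitude.
    ground_range = math.sqrt(max(slant_range_m ** 2 - altitude_m ** 2, 0.0))
    north = ground_range * math.cos(math.radians(bearing_deg))
    east = ground_range * math.sin(math.radians(bearing_deg))
    # Flat-earth conversion from meters to degrees (fine over tens of meters).
    meters_per_deg_lat = 111_320.0
    meters_per_deg_lon = 111_320.0 * math.cos(math.radians(drone_lat))
    return (drone_lat + north / meters_per_deg_lat,
            drone_lon + east / meters_per_deg_lon)

# Drone at 10 m altitude, 26 m slant range to the person, looking due north:
print(estimate_person_position(47.6101, -122.2015, 10.0, 26.0, 0.0))
```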
- the drone 208 moves closer to a detected individual to perform an ultrasound scan.
- the drone 208 may be programmed to move as close as possible to the detected individual, such as user 206 collapsed under the rubble.
- the drone 208 may perform a full body ultrasound scan to capture all features of user 206.
- one or more portions of user 206's body may be covered by rubble.
- the drone 208 may only perform scans on the exposed portion of user 206's body.
- the drone 208 may move to the next detected individual, such as user 204 , to perform the ultrasound scan on user 204 .
- the drone 208 may receive an audible sound coming from the user 204 while performing the ultrasound scan. If the drone 208 determines the audible sound is greater than a threshold level, such as when the user 204 is screaming or moaning in pain, the drone 208 can include, in the data provided to the remote processing unit 226, an emergency request indicating that the user 204 is in danger. In addition, the drone 208 can initiate communication with a first responder system 230 if the drone 208 determines the user 204 is in severe danger based on the audible sound being greater than the threshold level. Alternatively, the drone 208 can provide an indication to the user 204 to keep calm.
- the drone 208 can play a calming song, or the drone 208 can play an audible message to the user 204 that recites “Please remain calm, help is on the way.” The drone 208 may recite other messages to the user 204. Alternatively, the drone 208 may cease performing the ultrasound scan if the drone 208 determines the user 204 is scared. Afterwards, the drone 208 may return to the path 210 to find any other individuals in the demolished building 202.
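- The sound-level decision described in the preceding paragraphs might look like the sketch below; the decibel threshold, the "severe danger" margin, and the returned structure are illustrative assumptions.

```python
def respond_to_audio(level_db: float, threshold_db: float = 70.0) -> dict:
    """Decide how the drone reacts to sound from a detected person (illustrative values)."""
    if level_db <= threshold_db:
        return {"action": "continue_scan"}
    return {
        "action": "flag_emergency",                               # include an emergency request in the data
        "notify_first_responders": level_db > threshold_db + 20,  # "severe danger" margin is an assumption
        "playback": "Please remain calm, help is on the way.",    # calming message from the example above
    }

print(respond_to_audio(95.0))  # flags an emergency and notifies first responders
```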
- the drone 208 transmits data to the remote processing unit 226 .
- the data includes detected person data 216 , ultrasound data 218 , location data 220 , and detected image data 222 .
- the detected person data 216 includes information corresponding to the number of individuals detected during the drone 208's scan on path 210.
- the detected person data 216 may indicate that two individuals, user 204 and user 206 , were detected in the demolished building 202 .
- the ultrasound data 218 may include the ultrasound scans of the exposed body portions of user 204 and user 206 .
- the location data 220 may include the GPS locational data of user 204 and user 206 .
- the detected image data 222 may include the images from the drone 208's camera that include the detected individuals, as well as images in which no individual was detected. In some implementations, the images may include a tag indicating whether an individual is detected or not detected in that image.
- In stage (F′), which is similar to stage (E) of FIG. 1, the remote processing engine in the remote processing unit 226 processes the detected person data 216, the ultrasound data 218, the location data 220, and the detected image data 222 to produce an indication of the health of the one or more detected individuals.
- the remote processing engine provides a notification 228 to the first responder system 230 .
- the notification includes a reconstructed mapped environment of the images of the ultrasound scans and a corresponding severity indicator for each of the images.
- In some implementations, a drone, such as drone 208, can be used to monitor the scene of a vehicular accident.
- the drone 208 may or may not be programmed with a predetermined path 210 by a first responder.
- the drone 208 can be programmed to monitor an area that includes the vehicular accident. For example, the drone 208 can fly above the vehicular accident, near the windows of the vehicles involved in the accident, and low to the ground to search underneath the vehicles to determine whether an individual has been trapped underneath the vehicle.
- the drone 208 can perform steps similar to that of FIG. 1 and FIG. 2 to notify first responders if one or more injured individuals are found.
- drone 208 can fly a particular path around a search and rescue area in a forest to locate one or more lost individuals.
- the drone 208 may or may not be programmed with a predetermined path 210 by a first responder to fly through the forest searching for the lost individuals. If the drone 208 detects a lost individual, the drone 208 can perform steps similar to that of FIG. 1 and FIG. 2 to notify first responders and determine if the detected individual is injured.
- FIG. 3 is a contextual diagram of an example system 300 for training a neural network model for ultrasound analytics.
- the system 300 can train other types of machine learning models for ultrasound analytics, such as one or more clustering models, one or more deep learning models, Bayesian learning models, or any other type of model.
- the system 300 illustrates the application of a neural network model in the local processing engine of the drone 130 and the application of a neural network model in the remote processing engine of the remote processing unit 136 .
- the data provided as input to the model in the local processing engine comes from the set of sensors 131 mounted on the drone 314 .
- the data provided as input to the model in the remote processing engine comes from an output of analyzing the sensor data processed by the local processing engine.
- the local processing engine in the drone 314 trains a neural network model while the drone 314 is offline.
- the neural network model may include an input layer, an output layer, and one or more hidden layers.
- the local processing engine may use a machine learning technique to continuously train the neural network model.
- the local processing engine trains its neural network model using one or more training techniques. For instance, the local processing engine may train the neural network model using images that include zero or more individuals and a tag as to whether or not an individual exists in the image.
- the local processing engine applies the neural network model once sufficiently trained.
- the local processing engine in the drone 314 applies images captured from the camera mounted on the drone 314 to the trained model 304.
- the drone 314 sequentially inputs each image 302A-302N to the trained model 304 at a predetermined time interval.
- the predetermined time interval may be the length of time it takes for the trained model 304 to process one image 302C. In another instance, the predetermined time interval may be spaced by a time, such as 2 seconds.
- the trained model 304 produces an output for each image input to the trained model 304 .
- the output of the trained model 304 includes a detection or non-detection 306 and the input image 302N.
- the detection or non-detection 306 includes an indication of whether a person is detected in the image 302N. If a person is not detected in an image, such as image 302N, the local processing engine tags the image as having no individual detected. Alternatively, if the local processing engine indicates a detection in 306, the image 302N is provided as input to the location detection 310.
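- A minimal sketch of that inference loop: each captured frame is scored by the trained model at a fixed cadence and tagged with the detection result. The callable model interface and the stand-in lambda are assumptions; only the 2-second spacing comes from the example interval above.

```python
import time
from typing import Callable, Iterable, List

def detection_loop(images: Iterable[bytes],
                   trained_model: Callable[[bytes], bool],
                   interval_s: float = 2.0) -> List[dict]:
    """Feed each captured frame to the trained model and tag it with the result."""
    tagged = []
    for image in images:
        detected = trained_model(image)          # True if a person appears in the frame
        tagged.append({"image": image, "person_detected": detected})
        time.sleep(interval_s)                   # the 2-second spacing is the example interval
    return tagged

# Trivial stand-in model: "detect" a person whenever the frame is non-empty.
frames = [b"", b"<frame with person>", b""]
print(detection_loop(frames, trained_model=lambda img: len(img) > 0, interval_s=0.0))
```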
- the local processing engine calculates the locational position of the detected individual using the GPS locational position of the drone 314, the altitude of the drone 314, and an estimated distance between the drone 314 and the detected individual using slope estimation.
- the image 302N is tagged with the locational position of the detected individual.
- the local processing engine instructs the drone 314 to perform an ultrasound scan at the locational position of the detected individual, such as user 316, based on the determination that the image 302N includes user 316.
- the drone 314 moves in proximity to the location of the user 316 and performs ultrasound scans of the user 316 over different portions of the user 316's body. For instance, the drone 314 may initiate scanning the user 316's head, then move to scan the user 316's shoulders, and continue down to the user 316's feet to capture all features of user 316. This ensures all parts of user 316 can be checked for a health status.
- After performing the ultrasound scans, the drone 314 provides the captured data to a remote processing unit 324. As mentioned earlier with respect to FIG. 2, the drone 314 provides the detected person data 318, the ultrasound data 320, the location data 322, and the detected image data 308 to the remote processing unit 324. In some implementations, the drone 314 provides a new set of detected person data 318, ultrasound data 320, location data 322, and detected image data 308 each time a new ultrasound scan is performed on a newly detected individual. In other implementations, the drone 314 provides a new set of data each time the drone 314 comes in contact with the charging station 142.
- the drone 314 may be configured to only transmit data when connected to the charging station 142 to preserve battery life when monitoring the residential facility 102 .
- the remote processing unit 324 receives the detected person data 318 , the ultrasound data 320 , the location data 322 , and the detected image data 308 .
- the remote processing engine in the remote processing unit 324 processes each of the received data pieces.
- the remote processing engine provides the ultrasound data 320 to a reconstruction mechanism 328 .
- the reconstruction mechanism 328 converts each scan of ultrasound into an image 329. For example, if the drone 314 performs ten ultrasound scans on user 316, then the reconstruction mechanism 328 converts the ten ultrasound scans to ten corresponding images.
- the remote processing engine provides each image 329 converted from an ultrasound scan to a trained neural network model 330 .
- the trained model 330 is similar to trained model 304 .
- the trained model 330 may include an input layer, an output layer, and one or more hidden layers.
- the remote processing engine may use a machine learning technique to continuously train the neural network model to create the trained model 330 .
- the remote processing engine applies the trained model 330 once sufficiently trained.
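- Since the trained model 330 is described only as a neural network with an input layer, one or more hidden layers, and an output layer, a minimal PyTorch sketch of such a classifier over reconstructed ultrasound images might look as follows. The layer sizes, image dimensions, and injury classes are assumptions for illustration, not values taken from the disclosure.

```python
import torch
from torch import nn

INJURY_CLASSES = ["healthy", "broken_bone", "external_bleeding", "burn"]  # assumed labels

class UltrasoundHealthClassifier(nn.Module):
    """Input layer -> hidden layers -> output layer, as described for model 330."""

    def __init__(self, image_size=(64, 512), hidden_units=128):
        super().__init__()
        in_features = image_size[0] * image_size[1]
        self.net = nn.Sequential(
            nn.Flatten(),                          # input layer over the ultrasound image
            nn.Linear(in_features, hidden_units),  # hidden layer
            nn.ReLU(),
            nn.Linear(hidden_units, hidden_units), # second hidden layer
            nn.ReLU(),
            nn.Linear(hidden_units, len(INJURY_CLASSES)),  # output layer: one score per class
        )

    def forward(self, x):
        return self.net(x)

model = UltrasoundHealthClassifier()
scores = model(torch.randn(1, 1, 64, 512))        # one reconstructed scan as input
indication = INJURY_CLASSES[scores.argmax(dim=1).item()]
print(indication)
```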
- the remote processing engine in the remote processing unit 324 applies images 329 of the ultrasound data and the detected person data 318 to the trained model 330 .
- the trained model 330 is trained to produce an indication 331 of the health of the individual detected in the image from the captured ultrasound.
- the health of the individual 316 may include indicating whether the individual has sustained one or more broken bones, any external bleeding, or burn marks, to name a few examples.
- the remote processing engine may tag the input image 329 with the indication 331 .
- the remote processing engine may provide the tagged input image 329 with the indication 331 output from the trained model 330 to a severity indicator mechanism 332 .
- the severity indicator mechanism 332 analyzes the tagged description 331 to determine a severity indicator 333 of the individual in the image 329 .
- the severity indicator 333 is a number that indicates the severity of the individual's condition according to the tagged description. For instance, if the tagged description indicated “external bleeding,” the severity indicator mechanism 332 may provide a severity indication of ten. In another instance, if the tagged description indicated “broken arm,” the severity indicator mechanism 332 may provide a severity indication of seven. This reflects that external bleeding may be more severe than a broken arm, depending on the extent of the bleeding.
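- One simple way to realize the severity indicator mechanism 332 is a lookup table keyed by the tagged description. The scores for external bleeding and a broken arm mirror the examples in the preceding paragraph; the remaining entries and the default value are assumptions.

```python
# Severity scores keyed by the tagged description produced by the trained model.
# The bleeding and broken-arm values mirror the examples above; the rest are assumed.
SEVERITY_TABLE = {
    "external bleeding": 10,
    "broken leg": 8,
    "broken arm": 7,
    "burn marks": 6,
    "no injury detected": 0,
}

def severity_indicator(tagged_description: str, default: int = 5) -> int:
    """Return a 0-10 severity number for a tagged description (332 -> 333)."""
    return SEVERITY_TABLE.get(tagged_description.lower(), default)

print(severity_indicator("External bleeding"))  # 10
print(severity_indicator("Broken arm"))         # 7
```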
- the severity indicator mechanism 332 reconstructs a mapped environment 334 using the images converted from the ultrasound scans and the corresponding severity indicator for each of the images. For example, the severity indicator mechanism 332 reconstructs the mapped environment from the images of the ultrasound scans performed on user 316.
- the reconstructed mapped environment 334 may include an image converted from ultrasound of the user 316's head, the user 316's shoulders, the user 316's chest, and the remaining body sections down to the user 316's feet.
- Each of these images reconstructed in the mapped environment may include a severity indicator 333 .
- the severity indicator mechanism 332 may designate a severity indicator of zero to the head of user 316 , a severity indicator of one corresponding to the shoulder of user 316 , a severity indicator of zero corresponding to the arms of user 316 , and a severity indicator of ten corresponding to the legs of user 316 .
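- The reconstructed mapped environment 334 can be thought of as a head-to-feet collection of body regions, each carrying its reconstructed ultrasound image and severity indicator. The data layout and region names below are assumptions used for illustration.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class MappedEnvironment:
    """Head-to-feet map of reconstructed ultrasound images with severity scores."""
    person_id: str
    regions: Dict[str, Dict[str, Any]] = field(default_factory=dict)

    def add_region(self, name, image, severity):
        self.regions[name] = {"image": image, "severity": severity}

    def worst_region(self):
        # The region that first responders should look at first.
        return max(self.regions.items(), key=lambda kv: kv[1]["severity"])

env = MappedEnvironment(person_id="user_316")
for region, severity in [("head", 0), ("shoulders", 1), ("arms", 0), ("legs", 10)]:
    env.add_region(region, image=None, severity=severity)  # image omitted in this sketch
print(env.worst_region()[0])  # -> "legs"
```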
- the remote processing engine provides the reconstructed map 334 to the first responder system 335 to facilitate determining an injury of an identified user.
- the first responder system 335 can further train the trained model 330. For instance, after the first responder system 335 receives the reconstructed map 334, an individual, such as a medic, of the first responder system 335 may determine that the user 316 does not in fact have a broken leg, contrary to the determination of the trained model 330. In response, the medic of the first responder system 335 can update one or more medical reports that the trained model 330 accesses to generate a reconstructed mapped environment 334 to reflect a change to the medical diagnosis of the leg of user 316.
- the first responder system 335 may store the medical reports and transfer the medical records to the remote processing unit 226 .
- the remote processing engine may access the medical records for retraining the trained model 330 .
- the medical diagnosis in the medical reports indicates that the user 316's leg is healthy.
- the trained model 330 can access the received updated reports and the corresponding image 329 used in the reconstructed mapped environment 334 to retrain the trained model 330 to identify that the leg of user 316 in the image 329 is not broken.
- the trained model 330 can be retrained with other medical diagnosis updates for user 316 and other users.
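- Retraining from corrected medical reports amounts to pairing each stored ultrasound image with the updated diagnosis and running additional training steps on the model. The sketch below assumes PyTorch, a list of (image, corrected label) pairs, and a stand-in classifier; none of these details are prescribed by the disclosure.

```python
import torch
from torch import nn

def retrain_from_reports(model, corrections, class_names, epochs=3, lr=1e-4):
    """Fine-tune a classifier on (image, corrected diagnosis) pairs received
    from the first responder system. `corrections` is assumed to be a list of
    (image_tensor, corrected_label_string) tuples."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for image, corrected_label in corrections:
            target = torch.tensor([class_names.index(corrected_label)])
            optimizer.zero_grad()
            loss = loss_fn(model(image.unsqueeze(0)), target)
            loss.backward()
            optimizer.step()
    return model

# Usage with a stand-in model: the corrected report says the user's leg is healthy.
demo_model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 512, 4))
classes = ["healthy", "broken_bone", "external_bleeding", "burn"]
corrections = [(torch.randn(1, 64, 512), "healthy")]
retrain_from_reports(demo_model, corrections, classes, epochs=1)
```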
- FIG. 4 is a flowchart of an example process 400 for providing data corresponding to a detected individual for ultrasound analytics.
- the process 400 includes determining an identification of an individual in a frame of image data; determining a location of the identified individual in the frame of data using locational coordinates; obtaining ultrasound data of the identified individual in response to a drone's movement in proximity to the location of the identified individual to capture the ultrasound data; and providing the identification of the individual, the location of the identified individual, the frame of image data, and the ultrasound data of the identified individual to a remote processing unit.
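- Taken together, the steps of process 400 suggest a drone-side loop along the lines of the following sketch. The camera, GPS, detector, ultrasound, and uplink objects are assumed interfaces introduced only to make the flow concrete.

```python
from types import SimpleNamespace

def run_process_400(camera, gps, detector, ultrasound, uplink):
    """Sketch of process 400: detect a person in a frame, locate them,
    capture ultrasound nearby, and forward everything to the remote unit.
    The five arguments are assumed interfaces, not APIs from the disclosure."""
    frame = camera.capture_frame()
    detection = detector.detect_person(frame)           # step 1: identification in the frame
    if not detection:
        return None
    location = gps.locate_detection(frame, detection)   # step 2: locational coordinates
    scans = ultrasound.scan_at(location)                # step 3: move close and scan
    payload = {                                         # step 4: provide data to the remote unit
        "detected_person_data": detection,
        "location_data": location,
        "detected_image_data": frame,
        "ultrasound_data": scans,
    }
    uplink.send(payload)
    return payload

# Minimal stubs so the sketch executes end to end.
run_process_400(
    camera=SimpleNamespace(capture_frame=lambda: "frame"),
    gps=SimpleNamespace(locate_detection=lambda f, d: (38.88, -77.03)),
    detector=SimpleNamespace(detect_person=lambda f: {"person": True}),
    ultrasound=SimpleNamespace(scan_at=lambda loc: ["scan1", "scan2"]),
    uplink=SimpleNamespace(send=print),
)
```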
- the drone 130 determines an identification of an individual in a frame of image data.
- the drone 130 ′s set of devices 131 captures data during the drone 130 ′s flight around the predetermined path 132 .
- the data includes camera images and GPS locational data.
- the drone 130 feeds the camera images and the GPS locational data to a local processing engine included in the drone 130 ′s memory.
- the local processing engine produces an indication that an individual has been detected in the camera images.
- the local processing engine in the drone 130 applies images captured from the camera mounted on the drone 130 to a trained neural network model 304.
- the trained neural network model 304 produces an output for each image that indicates a detection of a person or a non-detection of a person in the image.
- the local processing engine determines a location of the identified individual in the frame of data using locational coordinates. In some implementations, the local processing engine calculates the locational position of the detected individual using the GPS location position of the drone 314 , the altitude of the drone 314 , and an estimated distance between the drone 314 and the detected individual using slope estimation.
- the image 302 N is tagged with the locational position of the detected individual.
- the local processing engine obtains ultrasound data of the identified individual in response to drone 130 ′s movement in proximity to the location of the identified individual to capture the ultrasound data.
- the local processing engine instructs the drone 314 to perform an ultrasound scan at the locational position of the detected individual, such as user 316, based on the determination that the image 302 N includes the user 316.
- the drone 314 moves in proximity to the position of the user 316 and performs ultrasound scans over different portions of the user 316's body. For instance, the drone 314 may begin by scanning the user 316's head, then move to scan the user 316's shoulders, and proceed down to the user 316's feet to capture all features of user 316. This ensures all parts of user 316 can be checked for health status.
- the local processing engine provides the identification of the individual, the location of the identified individual, the frame of image data, and the ultrasound data of the identified individual to a remote processing unit.
- the drone 314 transmits the detected person data 318 , the ultrasound data 320 , the location data 322 , and the detected image data 308 to the remote processing unit 324 .
- the drone 314 provides a new set of detected person data 318 , ultrasound data 320 , location data 322 , and detected image data 308 each time a new ultrasound scan is performed on a newly detected individual.
- the detected person data 318 includes information corresponding to the number of individuals detected during the drone 314's scan along the path.
- the location data 322 may include the GPS locational data of user 316 .
- the detected image data 308 may include the images from the drone 314's camera, including images with detected individuals and images without detections.
- the images may include a tag indicating whether an individual is detected or not detected in that image.
- FIG. 5 is a flowchart of an example process 500 for processing data corresponding to a detected individual for ultrasound analytics.
- the process 500 includes obtaining an identification of an individual, a location of the identified individual, a frame of image data, and ultrasound data of the identified individual from a drone; generating an ultrasound image from the obtained ultrasound data; determining whether the ultrasound image indicates that the identified individual has an injury; generating a severity indicator corresponding to each of the ultrasound images; generating a mapped environment that includes the ultrasound images stitched together along with the corresponding severity indicator for each of the ultrasound images; and providing the mapped environment to a first responder system.
- the remote processing engine obtains an identification of an individual, a location of the identified individual, a frame of image data, and ultrasound data of the identified individual from a drone 130 .
- the remote processing unit 324 receives the detected person data 318 , the ultrasound data 320 , the location data 322 , and the detected image data 308 .
- the remote processing engine in the remote processing unit 324 processes each of the received data items.
- the remote processing engine generates an ultrasound image from the obtained ultrasound data.
- the remote processing engine provides the ultrasound data 320 to a reconstruction mechanism 328 .
- the reconstruction mechanism 328 may convert each scan of ultrasound into an image 329. For example, if the drone 314 performs ten ultrasound scans on user 316, then the reconstruction mechanism 328 converts the ten ultrasound scans to ten corresponding images.
- the remote processing engine determines whether the ultrasound image indicates that the identified individual has an injury.
- the remote processing engine provides each image converted from an ultrasound scan to a trained neural network model 330 .
- the trained model 330 is trained to produce an indication 331 of the health of the individual detected in the image from the captured ultrasound.
- the health of the individual 316 may include an indication of whether the individual has sustained one or more broken bones, any external bleeding, or burn marks, to name a few examples.
- the remote processing engine may tag the input image 329 with the indication 331 .
- the remote processing engine generates a severity indicator corresponding to each of the ultrasound images.
- the remote processing engine may provide the tagged input image 329 with the indication 331 output from the trained model 330 to a severity indicator mechanism 332 .
- the severity indicator mechanism 332 analyzes the tagged description 331 to determine a severity indicator 333 of the individual in the image 329 .
- the severity indicator 333 is a number that indicates the severity of the individual's condition according to the tagged description. For instance, if the tagged description indicated “external bleeding,” the severity indicator mechanism 332 may provide a severity indication of ten. In another instance, if the tagged description indicated “broken arm,” the severity indicator mechanism 332 may provide a severity indication of seven. This reflects that external bleeding may be more severe than a broken arm, depending on the extent of the bleeding.
- the remote processing engine generates a mapped environment that includes the ultrasound images stitched together along with the corresponding severity indicator for each of the ultrasound images.
- the severity indicator mechanism 332 reconstructs a mapped environment 334 using the images converted from the ultrasound scans and the corresponding severity indicator for each of the images. For example, the severity indicator mechanism 332 reconstructs the mapped environment from the images of the ultrasound scans performed on user 316.
- the reconstructed mapped environment 334 may include an image converted from ultrasound of the user 316's head, the user 316's shoulders, the user 316's chest, and the remaining body sections down to the user 316's feet.
- Each of these images reconstructed in the mapped environment may include a severity indicator 333 .
- the severity indicator mechanism 332 may designate a severity indicator of zero to the head of user 316 , a severity indicator of one corresponding to the shoulder of user 316 , a severity indicator of zero corresponding to the arms of user 316 , and a severity indicator of ten corresponding to the legs of user 316 .
- the remote processing engine provides the mapped environment to a first responder system.
- providing the reconstructed mapped environment 334 to the first responder system 335 facilitates determining an injury of an identified user.
- FIG. 6 is a block diagram of an example integrated security environment 600 for ultrasound analytics that may utilize various components.
- the electronic system 600 includes a network 605 , a control unit 610 , one or more user devices 640 and 650 , a monitoring application server 660 , and a central alarm station server 670 .
- the network 605 facilitates communications between the control unit 610 , the one or more user devices 640 and 650 , the monitoring application server 660 , and the central alarm station server 670 .
- the network 605 is configured to enable exchange of electronic communications between devices connected to the network 605 .
- the network 605 may be configured to enable exchange of electronic communications between the control unit 610 , the one or more user devices 640 and 650 , the monitoring application server 660 , and the central alarm station server 670 .
- the network 605 may include, for example, one or more of the Internet, Wide Area Networks (WANs), Local Area Networks (LANs), analog or digital wired and wireless telephone networks (e.g., a public switched telephone network (PSTN), Integrated Services Digital Network (ISDN), a cellular network, and Digital Subscriber Line (DSL)), radio, television, cable, satellite, or any other delivery or tunneling mechanism for carrying data.
- Network 605 may include multiple networks or subnetworks, each of which may include, for example, a wired or wireless data pathway.
- the network 605 may include a circuit-switched network, a packet-switched data network, or any other network able to carry electronic communications (e.g., data or voice communications).
- the network 605 may include networks based on the Internet protocol (IP), asynchronous transfer mode (ATM), the PSTN, packet-switched networks based on IP, X.25, or Frame Relay, or other comparable technologies and may support voice using, for example, VoIP, or other comparable protocols used for voice communications.
- the network 605 may include one or more networks that include wireless data channels and wireless voice channels.
- the network 605 may be a wireless network, a broadband network, or a combination of networks including a wireless network and a broadband network.
- the control unit 610 includes a controller 612 and a network module 614 .
- the controller 612 is configured to control a control unit monitoring system (e.g., a control unit system) that includes the control unit 610 .
- the controller 612 may include a processor or other control circuitry configured to execute instructions of a program that controls operation of a control unit system.
- the controller 612 may be configured to receive input from sensors, flow meters, or other devices included in the control unit system and control operations of devices included in the household (e.g., speakers, lights, doors, etc.).
- the controller 612 may be configured to control operation of the network module 614 included in the control unit 610 .
- the network module 614 is a communication device configured to exchange communications over the network 605 .
- the network module 614 may be a wireless communication module configured to exchange wireless communications over the network 605 .
- the network module 614 may be a wireless communication device configured to exchange communications over a wireless data channel and a wireless voice channel.
- the network module 614 may transmit alarm data over a wireless data channel and establish a two-way voice communication session over a wireless voice channel.
- the wireless communication device may include one or more of a LTE module, a GSM module, a radio modem, cellular transmission module, or any type of module configured to exchange communications in one of the following formats: LTE, GSM or GPRS, CDMA, EDGE or EGPRS, EV-DO or EVDO, UMTS, or IP.
- the network module 614 also may be a wired communication module configured to exchange communications over the network 605 using a wired connection.
- the network module 614 may be a modem, a network interface card, or another type of network interface device.
- the network module 614 may be an Ethernet network card configured to enable the control unit 610 to communicate over a local area network and/or the Internet.
- the network module 614 also may be a voiceband modem configured to enable the alarm panel to communicate over the telephone lines of Plain Old Telephone Systems (POTS).
- the control unit system that includes the control unit 610 includes one or more sensors.
- the monitoring system may include multiple sensors 620 .
- the sensors 620 may include a lock sensor, a contact sensor, a motion sensor, or any other type of sensor included in a control unit system.
- the sensors 620 also may include an environmental sensor, such as a temperature sensor, a water sensor, a rain sensor, a wind sensor, a light sensor, a smoke detector, a carbon monoxide detector, an air quality sensor, etc.
- the sensors 620 further may include a health monitoring sensor, such as a prescription bottle sensor that monitors taking of prescriptions, a blood pressure sensor, a blood sugar sensor, a bed mat configured to sense presence of liquid (e.g., bodily fluids) on the bed mat, etc.
- the sensors 620 may include a radio-frequency identification (RFID) sensor that identifies a particular article that includes a pre-assigned RFID tag.
- the control unit 610 communicates with the module 622 and the camera 630 to perform monitoring.
- the module 622 is connected to one or more devices that enable home automation control.
- the module 622 may be connected to one or more lighting systems and may be configured to control operation of the one or more lighting systems.
- the module 622 may be connected to one or more electronic locks at the property and may be configured to control operation of the one or more electronic locks (e.g., control Z-Wave locks using wireless communications in the Z-Wave protocol).
- the module 622 may be connected to one or more appliances at the property and may be configured to control operation of the one or more appliances.
- the module 622 may include multiple modules that are each specific to the type of device being controlled in an automated manner.
- the module 622 may control the one or more devices based on commands received from the control unit 610 . For instance, the module 622 may cause a lighting system to illuminate an area to provide a better image of the area when captured by a camera 630 .
- the camera 630 may be a video/photographic camera or other type of optical sensing device configured to capture images.
- the camera 630 may be configured to capture images of an area within a building or within a residential facility 102 monitored by the control unit 610 .
- the camera 630 may be configured to capture single, static images of the area and also video images of the area in which multiple images of the area are captured at a relatively high frequency (e.g., thirty images per second).
- the camera 630 may be controlled based on commands received from the control unit 610 .
- the camera 630 may be triggered by several different types of techniques. For instance, a Passive Infra-Red (PIR) motion sensor may be built into the camera 630 and used to trigger the camera 630 to capture one or more images when motion is detected.
- the camera 630 also may include a microwave motion sensor built into the camera and used to trigger the camera 630 to capture one or more images when motion is detected.
- the camera 630 may have a “normally open” or “normally closed” digital input that can trigger capture of one or more images when external sensors (e.g., the sensors 620 , PIR, door/window, etc.) detect motion or other events.
- the camera 630 receives a command to capture an image when external devices detect motion or another potential alarm event.
- the camera 630 may receive the command from the controller 612 or directly from one of the sensors 620 .
- the camera 630 triggers integrated or external illuminators (e.g., Infra-Red, Z-wave controlled “white” lights, lights controlled by the module 622 , etc.) to improve image quality when the scene is dark.
- An integrated or separate light sensor may be used to determine if illumination is desired and may result in increased image quality.
- the camera 630 may be programmed with any combination of time/day schedules, system “arming state”, or other variables to determine whether images should be captured or not when triggers occur.
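- The trigger rules described above for the camera 630 (time/day schedules, the system arming state, and motion events) can be combined into a single capture decision, as in the following sketch. The specific policy encoded here, such as always capturing on motion while armed away, is an assumption for illustration.

```python
from datetime import datetime

def should_capture(arming_state, motion_detected, now=None,
                   schedule=(("07:00", "22:00"),)):
    """Decide whether the camera 630 should capture an image.

    Assumed policy: always capture on motion when armed 'away'; otherwise
    capture only inside the configured time windows while armed.
    """
    now = now or datetime.now()
    if arming_state == "disarmed":
        return False
    if arming_state == "away" and motion_detected:
        return True
    current = now.strftime("%H:%M")
    return any(start <= current <= end for start, end in schedule)

print(should_capture("away", motion_detected=True))       # True
print(should_capture("stay", motion_detected=False,
                     now=datetime(2021, 3, 9, 23, 30)))   # False (outside schedule)
```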
- the camera 630 may enter a low-power mode when not capturing images. In this case, the camera 630 may wake periodically to check for inbound messages from the controller 612 .
- the camera 630 may be powered by internal, replaceable batteries if located remotely from the control unit 610 .
- the camera 630 may employ a small solar cell to recharge the battery when light is available.
- the camera 630 may be powered by the power supply of the controller 612 if the camera 630 is co-located with the controller 612.
- the camera 630 communicates directly with the monitoring application server 660 over the Internet. In these implementations, image data captured by the camera 630 does not pass through the control unit 610 and the camera 630 receives commands related to operation from the monitoring application server 660 .
- the system 600 also includes thermostat 634 to perform dynamic environmental control at the property.
- the thermostat 634 is configured to monitor temperature and/or energy consumption of an HVAC system associated with the thermostat 634 , and is further configured to provide control of environmental (e.g., temperature) settings.
- the thermostat 634 can additionally or alternatively receive data relating to activity at a property and/or environmental data at a property, e.g., at various locations indoors and outdoors at the property.
- the thermostat 634 can directly measure energy consumption of the HVAC system associated with the thermostat, or can estimate energy consumption of the HVAC system associated with the thermostat 634 , for example, based on detected usage of one or more components of the HVAC system associated with the thermostat 634 .
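- Estimating energy consumption from detected usage of HVAC components, as described above, can be as simple as multiplying each component's rated power draw by its observed runtime. The rated-power figures below are assumptions used for illustration.

```python
# Assumed rated power draw per HVAC component, in kilowatts.
RATED_POWER_KW = {"compressor": 3.5, "blower_fan": 0.5, "aux_heat": 10.0}

def estimate_hvac_energy_kwh(runtime_hours_by_component):
    """Estimate energy use from detected component runtimes (hours)."""
    return sum(RATED_POWER_KW.get(name, 0.0) * hours
               for name, hours in runtime_hours_by_component.items())

# Example: 2.5 h of compressor and fan runtime, no auxiliary heat -> 10.0 kWh.
print(estimate_hvac_energy_kwh({"compressor": 2.5, "blower_fan": 2.5}))
```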
- the thermostat 634 can communicate temperature and/or energy monitoring information to or from the control unit 610 and can control the environmental (e.g., temperature) settings based on commands received from the control unit 610 .
- the thermostat 634 is a dynamically programmable thermostat and can be integrated with the control unit 610 .
- the dynamically programmable thermostat 634 can include the control unit 610 , e.g., as an internal component to the dynamically programmable thermostat 634 .
- the control unit 610 can be a gateway device that communicates with the dynamically programmable thermostat 634 .
- a module 637 is connected to one or more components of an HVAC system associated with a property, and is configured to control operation of the one or more components of the HVAC system.
- the module 637 is also configured to monitor energy consumption of the HVAC system components, for example, by directly measuring the energy consumption of the HVAC system components or by estimating the energy usage of the one or more HVAC system components based on detecting usage of components of the HVAC system.
- the module 637 can communicate energy monitoring information and the state of the HVAC system components to the thermostat 634 and can control the one or more components of the HVAC system based on commands received from the thermostat 634 .
- the system 600 further includes one or more robotic devices.
- the robotic devices may be any type of robots that are capable of moving and taking actions that assist in security monitoring.
- the robotic devices may include drones that are capable of moving throughout a property based on automated control technology and/or user input control provided by a user.
- the drones may be able to fly, roll, walk, or otherwise move about the property.
- the drones may include helicopter type devices (e.g., quad copters), rolling helicopter type devices (e.g., roller copter devices that can fly and also roll along the ground, walls, or ceiling) and land vehicle type devices (e.g., automated cars that drive around a property).
- the robotic devices may be robotic devices that are intended for other purposes and merely associated with the system 600 for use in appropriate circumstances.
- a robotic vacuum cleaner device may be associated with the monitoring system 600 as one of the robotic devices and may be controlled to take action responsive to monitoring system events.
- the robotic devices automatically navigate within a property.
- the robotic devices include sensors and control processors that guide movement of the robotic devices within the property.
- the robotic devices may navigate within the property using one or more cameras, one or more proximity sensors, one or more gyroscopes, one or more accelerometers, one or more magnetometers, a global positioning system (GPS) unit, an altimeter, one or more sonar or laser sensors, and/or any other types of sensors that aid in navigation about a space.
- the robotic devices may include control processors that process output from the various sensors and control the robotic devices to move along a path that reaches the desired destination and avoids obstacles. In this regard, the control processors detect walls or other obstacles in the property and guide movement of the robotic devices in a manner that avoids the walls and other obstacles.
- the robotic devices may store data that describes attributes of the property.
- the robotic devices may store a floorplan and/or a three-dimensional model of the property that enables the robotic devices to navigate the property.
- the robotic devices may receive the data describing attributes of the property, determine a frame of reference to the data (e.g., a home or reference location in the property), and navigate the property based on the frame of reference and the data describing attributes of the property.
- initial configuration of the robotic devices also may include learning of one or more navigation patterns in which a user provides input to control the robotic devices to perform a specific navigation action (e.g., fly to an upstairs bedroom and spin around while capturing video and then return to a home charging base).
- the robotic devices may learn and store the navigation patterns such that the robotic devices may automatically repeat the specific navigation actions upon a later request.
- the robotic devices may include data capture and recording devices.
- the robotic devices may include one or more cameras, one or more motion sensors, one or more microphones, one or more biometric data collection tools, one or more temperature sensors, one or more humidity sensors, one or more air flow sensors, and/or any other types of sensors that may be useful in capturing monitoring data related to the property and users in the property.
- the one or more biometric data collection tools may be configured to collect biometric samples of a person in the home with or without contact of the person.
- the biometric data collection tools may include a fingerprint scanner, a hair sample collection tool, a skin cell collection tool, and/or any other tool that allows the robotic devices to take and store a biometric sample that can be used to identify the person (e.g., a biometric sample with DNA that can be used for DNA testing).
- the robotic devices may include output devices.
- the robotic devices may include one or more displays, one or more speakers, and/or any type of output devices that allow the robotic devices to communicate information to a nearby user.
- the robotic devices also may include a communication module that enables the robotic devices to communicate with the control unit 610 , each other, and/or other devices.
- the communication module may be a wireless communication module that allows the robotic devices to communicate wirelessly.
- the communication module may be a Wi-Fi module that enables the robotic devices to communicate over a local wireless network at the property.
- the communication module further may be a 900 MHz wireless communication module that enables the robotic devices to communicate directly with the control unit 610 .
- Other types of short-range wireless communication protocols such as Bluetooth, Bluetooth LE, Zwave, Zigbee, etc., may be used to allow the robotic devices to communicate with other devices in the property.
- the robotic devices further may include processor and storage capabilities.
- the robotic devices may include any suitable processing devices that enable the robotic devices to operate applications and perform the actions described throughout this disclosure.
- the robotic devices may include solid state electronic storage that enables the robotic devices to store applications, configuration data, collected sensor data, and/or any other type of information available to the robotic devices.
- the robotic devices are associated with one or more charging stations.
- the charging stations may be located at predefined home base or reference locations in the property.
- the robotic devices may be configured to navigate to the charging stations after completion of tasks needed to be performed for the monitoring system 600 . For instance, after completion of a monitoring operation or upon instruction by the control unit 610 , the robotic devices may be configured to automatically fly to and land on one of the charging stations. In this regard, the robotic devices may automatically maintain a fully charged battery in a state in which the robotic devices are ready for use by the monitoring system 600 .
- the charging stations may be contact based charging stations and/or wireless charging stations.
- the robotic devices may have readily accessible points of contact that the robotic devices are capable of positioning and mating with a corresponding contact on the charging station.
- a helicopter type robotic device may have an electronic contact on a portion of its landing gear that rests on and mates with an electronic pad of a charging station when the helicopter type robotic device lands on the charging station.
- the electronic contact on the robotic device may include a cover that opens to expose the electronic contact when the robotic device is charging and closes to cover and insulate the electronic contact when the robotic device is in operation.
- the robotic devices may charge through a wireless exchange of power.
- the robotic devices need only locate themselves closely enough to the wireless charging stations for the wireless exchange of power to occur.
- the positioning needed to land at a predefined home base or reference location in the property may be less precise than with a contact based charging station.
- Based on the robotic devices landing at a wireless charging station, the wireless charging station outputs a wireless signal that the robotic devices receive and convert to a power signal that charges a battery maintained on the robotic devices.
- each of the robotic devices has a corresponding and assigned charging station such that the number of robotic devices equals the number of charging stations.
- each of the robotic devices always navigates to the specific charging station assigned to that robotic device. For instance, a first robotic device may always use a first charging station and a second robotic device may always use a second charging station.
- the robotic devices may share charging stations.
- the robotic devices may use one or more community charging stations that are capable of charging multiple robotic devices.
- the community charging station may be configured to charge multiple robotic devices in parallel.
- the community charging station may be configured to charge multiple robotic devices in serial such that the multiple robotic devices take turns charging and, when fully charged, return to a predefined home base or reference location in the property that is not associated with a charger.
- the number of community charging stations may be less than the number of robotic devices.
- the charging stations may not be assigned to specific robotic devices and may be capable of charging any of the robotic devices.
- the robotic devices may use any suitable, unoccupied charging station when not in use. For instance, when one of the robotic devices has completed an operation or is in need of battery charge, the control unit 610 references a stored table of the occupancy status of each charging station and instructs the robotic device to navigate to the nearest charging station that is unoccupied.
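- The occupancy-table lookup described above maps naturally onto a nearest-unoccupied search over the known charging stations. The station record format and the straight-line distance metric below are assumptions used for illustration.

```python
import math

def nearest_unoccupied_station(drone_position, station_table):
    """Pick the closest charging station whose occupancy flag is False.

    `station_table` is assumed to be a dict of
    station_id -> {"position": (x, y), "occupied": bool}, mirroring the
    stored occupancy table referenced by the control unit 610.
    """
    candidates = [
        (math.dist(drone_position, info["position"]), station_id)
        for station_id, info in station_table.items()
        if not info["occupied"]
    ]
    return min(candidates)[1] if candidates else None

stations = {
    "dock_a": {"position": (0.0, 0.0), "occupied": True},
    "dock_b": {"position": (4.0, 3.0), "occupied": False},
    "dock_c": {"position": (10.0, 1.0), "occupied": False},
}
print(nearest_unoccupied_station((1.0, 1.0), stations))  # -> "dock_b"
```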
- the system 600 further includes one or more integrated security devices 680 .
- the one or more integrated security devices may include any type of device used to provide alerts based on received sensor data.
- the one or more control units 610 may provide one or more alerts to the one or more integrated security input/output devices.
- the one or more control units 610 may receive one or more sensor data from the sensors 620 and determine whether to provide an alert to the one or more integrated security input/output devices 680 .
- the sensors 620 , the module 622 , the camera 630 , the thermostat 634 , and the integrated security devices 680 communicate with the controller 612 over communication links 624 , 626 , 628 , 632 , 684 , and 686 .
- the communication links 624 , 626 , 628 , 632 , 684 , and 686 may be a wired or wireless data pathway configured to transmit signals from the sensors 620 , the module 622 , the camera 630 , the thermostat 634 , and the integrated security devices 680 to the controller 612 .
- the sensors 620 , the module 622 , the camera 630 , the thermostat 634 , and the integrated security devices 680 may continuously transmit sensed values to the controller 612 , periodically transmit sensed values to the controller 612 , or transmit sensed values to the controller 612 in response to a change in a sensed value.
- the communication links 624 , 626 , 628 , 632 , 684 , and 686 may include a local network.
- the sensors 620 , the module 622 , the camera 630 , the thermostat 634 , and the integrated security devices 680 , and the controller 612 may exchange data and commands over the local network.
- the local network may include 802.11 “Wi-Fi” wireless Ethernet (e.g., using low-power Wi-Fi chipsets), Z-Wave, Zigbee, Bluetooth, “Homeplug” or other “Powerline” networks that operate over AC wiring, and a Category 5 (CAT5) or Category 6 (CAT6) wired Ethernet network.
- the local network may be a mesh network constructed based on the devices connected to the mesh network.
- the monitoring application server 660 is an electronic device configured to provide monitoring services by exchanging electronic communications with the control unit 610 , the one or more user devices 640 and 650 , and the central alarm station server 670 over the network 605 .
- the monitoring application server 660 may be configured to monitor events (e.g., alarm events) generated by the control unit 610 .
- the monitoring application server 660 may exchange electronic communications with the network module 614 included in the control unit 610 to receive information regarding events (e.g., alerts) detected by the control unit server 104 a .
- the monitoring application server 660 also may receive information regarding events (e.g., alerts) from the one or more user devices 640 and 650 .
- the monitoring application server 660 may route alert data received from the network module 614 or the one or more user devices 640 and 650 to the central alarm station server 670 .
- the monitoring application server 660 may transmit the alert data to the central alarm station server 670 over the network 605 .
- the monitoring application server 660 may store sensor and image data received from the monitoring system and perform analysis of sensor and image data received from the monitoring system. Based on the analysis, the monitoring application server 660 may communicate with and control aspects of the control unit 610 or the one or more user devices 640 and 650 .
- the central alarm station server 670 is an electronic device configured to provide alarm monitoring service by exchanging communications with the control unit 610 , the one or more mobile devices 640 and 650 , and the monitoring application server 660 over the network 605 .
- the central alarm station server 670 may be configured to monitor alerting events generated by the control unit 610 .
- the central alarm station server 670 may exchange communications with the network module 614 included in the control unit 610 to receive information regarding alerting events detected by the control unit 610 .
- the central alarm station server 670 also may receive information regarding alerting events from the one or more mobile devices 640 and 650 and/or the monitoring application server 660 .
- the central alarm station server 670 is connected to multiple terminals 672 and 674 .
- the terminals 672 and 674 may be used by operators to process alerting events.
- the central alarm station server 670 may route alerting data to the terminals 672 and 674 to enable an operator to process the alerting data.
- the terminals 672 and 674 may include general-purpose computers (e.g., desktop personal computers, workstations, or laptop computers) that are configured to receive alerting data from a server in the central alarm station server 670 and render a display of information based on the alerting data.
- the controller 612 may control the network module 614 to transmit, to the central alarm station server 670, alerting data indicating that a motion sensor of the sensors 620 detected motion.
- the central alarm station server 670 may receive the alerting data and route the alerting data to the terminal 672 for processing by an operator associated with the terminal 672 .
- the terminal 672 may render a display to the operator that includes information associated with the alerting event (e.g., the lock sensor data, the motion sensor data, the contact sensor data, etc.) and the operator may handle the alerting event based on the displayed information.
- the terminals 672 and 674 may be mobile devices or devices designed for a specific function.
- Although FIG. 6 illustrates two terminals for brevity, actual implementations may include more (and, perhaps, many more) terminals.
- the one or more user devices 640 and 650 are devices that host and display user interfaces.
- the user device 640 is a mobile device that hosts one or more native applications (e.g., the smart home application 642 ).
- the user device 640 may be a cellular phone or a non-cellular locally networked device with a display.
- the user device 640 may include a cell phone, a smart phone, a tablet PC, a personal digital assistant (“PDA”), or any other portable device configured to communicate over a network and display information.
- implementations may also include Blackberry-type devices (e.g., as provided by Research in Motion), electronic organizers, iPhone-type devices (e.g., as provided by Apple), iPod devices (e.g., as provided by Apple) or other portable music players, other communication devices, and handheld or portable electronic devices for gaming, communications, and/or data organization.
- the user device 640 may perform functions unrelated to the monitoring system, such as placing personal telephone calls, playing music, playing video, displaying pictures, browsing the Internet, maintaining an electronic calendar, etc.
- the user device 640 includes a smart home application 642 .
- the smart home application 642 refers to a software/firmware program running on the corresponding mobile device that enables the user interface and features described throughout.
- the user device 640 may load or install the smart home application 642 based on data received over a network or data received from local media.
- the smart home application 642 runs on mobile device platforms, such as iPhone, iPod touch, Blackberry, Google Android, Windows Mobile, etc.
- the smart home application 642 enables the user device 640 to receive and process image and sensor data from the monitoring system.
- the user device 650 may be a general-purpose computer (e.g., a desktop personal computer, a workstation, or a laptop computer) that is configured to communicate with the monitoring application server 660 and/or the control unit 610 over the network 605 .
- the user device 650 may be configured to display a smart home user interface 652 that is generated by the user device 650 or generated by the monitoring application server 660 .
- the user device 650 may be configured to display a user interface (e.g., a web page) provided by the monitoring application server 660 that enables a user to perceive images captured by the camera 630 and/or reports related to the monitoring system.
- Although FIG. 6 illustrates two user devices for brevity, actual implementations may include more (and, perhaps, many more) or fewer user devices.
- the one or more user devices 640 and 650 communicate with and receive monitoring system data from the control unit 610 using the communication link 638 .
- the one or more user devices 640 and 650 may communicate with the control unit 610 using various local wireless protocols such as Wi-Fi, Bluetooth, Zwave, Zigbee, HomePlug (ethernet over powerline), or wired protocols such as Ethernet and USB, to connect the one or more user devices 640 and 650 to local security and automation equipment.
- the one or more user devices 640 and 650 may connect locally to the monitoring system and its sensors and other devices. The local connection may improve the speed of status and control communications because communicating through the network 605 with a remote server (e.g., the monitoring application server 660 ) may be significantly slower.
- Although the one or more user devices 640 and 650 are shown as communicating with the control unit 610, the one or more user devices 640 and 650 may communicate directly with the sensors and other devices controlled by the control unit 610. In some implementations, the one or more user devices 640 and 650 replace the control unit 610 and perform the functions of the control unit 610 for local monitoring and long range/offsite communication.
- the one or more user devices 640 and 650 receive monitoring system data captured by the control unit 610 through the network 605 .
- the one or more user devices 640 , 650 may receive the data from the control unit 610 through the network 605 or the monitoring application server 660 may relay data received from the control unit 610 to the one or more user devices 640 and 650 through the network 605 .
- the monitoring application server 660 may facilitate communication between the one or more user devices 640 and 650 and the monitoring system.
- the one or more user devices 640 and 650 may be configured to switch whether the one or more user devices 640 and 650 communicate with the control unit 610 directly (e.g., through link 638 ) or through the monitoring application server 660 (e.g., through network 605 ) based on a location of the one or more user devices 640 and 650 . For instance, when the one or more user devices 640 and 650 are located close to the control unit 610 and in range to communicate directly with the control unit 610 , the one or more user devices 640 and 650 use direct communication. When the one or more user devices 640 and 650 are located far from the control unit 610 and not in range to communicate directly with the control unit 610 , the one or more user devices 640 and 650 use communication through the monitoring application server 660 .
- Although the one or more user devices 640 and 650 are shown as being connected to the network 605, in some implementations, the one or more user devices 640 and 650 are not connected to the network 605. In these implementations, the one or more user devices 640 and 650 communicate directly with one or more of the monitoring system components and no network (e.g., Internet) connection or reliance on remote servers is needed.
- the one or more user devices 640 and 650 are used in conjunction with only local sensors and/or local devices in a house.
- the system 600 only includes the one or more user devices 640 and 650 , the sensors 620 , the module 622 , the camera 630 , and the robotic devices.
- the one or more user devices 640 and 650 receive data directly from the sensors 620, the module 622, the camera 630, and the robotic devices and send data directly to the sensors 620, the module 622, the camera 630, and the robotic devices.
- the one or more user devices 640 , 650 provide the appropriate interfaces/processing to provide visual surveillance and reporting.
- system 600 further includes network 605 and the sensors 620 , the module 622 , the camera 630 , the thermostat 634 , and the robotic devices are configured to communicate sensor and image data to the one or more user devices 640 and 650 over network 605 (e.g., the Internet, cellular network, etc.).
- the sensors 620 , the module 622 , the camera 630 , the thermostat 634 , and the robotic devices are intelligent enough to change the communication pathway from a direct local pathway when the one or more user devices 640 and 650 are in close physical proximity to the sensors 620 , the module 622 , the camera 630 , the thermostat 634 , and the robotic devices to a pathway over network 605 when the one or more user devices 640 and 650 are farther from the sensors 620 , the module 622 , the camera 630 , the thermostat 634 , and the robotic devices.
- the system leverages GPS information from the one or more user devices 640 and 650 to determine whether the one or more user devices 640 and 650 are close enough to the sensors 620 , the module 622 , the camera 630 , the thermostat 634 , and the robotic devices to use the direct local pathway or whether the one or more user devices 640 and 650 are far enough from the sensors 620 , the module 622 , the camera 630 , the thermostat 634 , and the robotic devices that the pathway over network 605 is required.
- the system leverages status communications (e.g., pinging) between the one or more user devices 640 and 650 and the sensors 620 , the module 622 , the camera 630 , the thermostat 634 , and the robotic devices to determine whether communication using the direct local pathway is possible. If communication using the direct local pathway is possible, the one or more user devices 640 and 650 communicate with the sensors 620 , the module 622 , the camera 630 , the thermostat 634 , and the robotic devices using the direct local pathway. If communication using the direct local pathway is not possible, the one or more user devices 640 and 650 communicate with the sensors 620 , the module 622 , the camera 630 , the thermostat 634 , and the robotic devices using the pathway over network 605 .
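- The ping-based pathway selection described above can be sketched as trying the direct local pathway first and falling back to the route over network 605 when the status check fails. The transport interfaces below are assumptions used for illustration.

```python
from types import SimpleNamespace

def send_command(payload, direct_link, server_link, timeout_s=1.0):
    """Send a command to a device, preferring the direct local pathway.

    `direct_link.ping()` and the two `send()` methods are assumed interfaces:
    if the local status check succeeds the command goes over the local
    pathway, otherwise it is routed through the monitoring application server.
    """
    try:
        if direct_link.ping(timeout=timeout_s):
            return direct_link.send(payload)
    except OSError:
        pass  # local pathway unreachable; fall through to the server route
    return server_link.send(payload)

# Usage sketch with stub links: the ping fails, so the server route is used.
local = SimpleNamespace(ping=lambda timeout: False, send=lambda p: "sent locally")
remote = SimpleNamespace(send=lambda p: "sent via network 605")
print(send_command({"cmd": "arm"}, local, remote))
```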
- the system 600 provides end users with access to images captured by the camera 630 to aid in decision making.
- the system 600 may transmit the images captured by the camera 630 over a wireless WAN network to the user devices 640 and 650 . Because transmission over a wireless WAN network may be relatively expensive, the system 600 uses several techniques to reduce costs while providing access to significant levels of useful visual information.
- a state of the monitoring system and other events sensed by the monitoring system may be used to enable/disable video/image recording devices (e.g., the camera 630 ).
- the camera 630 may be set to capture images on a periodic basis when the alarm system is armed in an “Away” state, but set not to capture images when the alarm system is armed in a “Stay” state or disarmed.
- the camera 630 may be triggered to begin capturing images when the alarm system detects an event, such as an alarm event, a door-opening event for a door that leads to an area within a field of view of the camera 630 , or motion in the area within the field of view of the camera 630 .
- the camera 630 may capture images continuously, but the captured images may be stored or transmitted over a network when needed.
- the described systems, methods, and techniques may be implemented in digital electronic circuitry, computer hardware, firmware, software, or in combinations of these elements. Apparatus implementing these techniques may include appropriate input and output devices, a computer processor, and a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor. A process implementing these techniques may be performed by a programmable processor executing a program of instructions to perform desired functions by operating on input data and generating appropriate output.
- the techniques may be implemented in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device.
- Each computer program may be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language may be a compiled or interpreted language.
- Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, a processor will receive instructions and data from a read-only memory and/or a random access memory.
- Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and Compact Disc Read-Only Memory (CD-ROM). Any of the foregoing may be supplemented by, or incorporated in, specially designed ASICs (application-specific integrated circuits).
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Veterinary Medicine (AREA)
- Biomedical Technology (AREA)
- Pathology (AREA)
- Public Health (AREA)
- Animal Behavior & Ethology (AREA)
- Surgery (AREA)
- Physiology (AREA)
- Molecular Biology (AREA)
- Heart & Thoracic Surgery (AREA)
- Biophysics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Emergency Management (AREA)
- Aviation & Aerospace Engineering (AREA)
- Evolutionary Computation (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Mathematical Physics (AREA)
- Fuzzy Systems (AREA)
- Signal Processing (AREA)
- Dentistry (AREA)
- Psychiatry (AREA)
- Acoustics & Sound (AREA)
- Automation & Control Theory (AREA)
- Computer Networks & Wireless Communication (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Evolutionary Biology (AREA)
Abstract
Description
- This application is a continuation of U.S. application Ser. No. 16/205,037, filed Nov. 29, 2018, which claims the benefit of U.S. Provisional Application No. 62/591,920, filed Nov. 29, 2017, and titled “Ultrasound Analytics for Actionable Information.” The complete disclosures of all of the above patent applications are hereby incorporated by reference in their entirety for all purposes.
- This specification relates generally to integrated security technology, and in particular, to integrated security technology to provide actionable information to first responders using ultrasound data.
- Integrated security includes the use of security hardware in place on a property, such as a residential property and a commercial property. Typical uses of security at a particular property include detecting intrusion, detecting unlocked doors, detecting when an individual is harmed at the property, and tripping one or more alarms.
- The subject matter of the present disclosure is related to systems and techniques for gathering information on the health of one or more individuals trapped in an accident to provide actionable information to a first responder system. The techniques may use ultrasound, camera images, GPS locational data, and machine learning algorithms to provide the actionable information to the first responder system. The machine learning algorithms may include algorithms such as one or more neural network models, Bayesian learning models, or any other type of machine learning technique, to detect injuries of the individuals trapped in the accident. In response to detecting injuries of the individuals trapped in the accident, the systems may transmit a notification to a first responder system and to other individuals who may know the injured individual, indicating that the individual is trapped and injured. The benefit of providing the indication of the injured individual is that other individuals related to the injured individual can be aware of the status of the injured individual in the case of an emergency, such as a fire, earthquake, or flood, to name a few examples. Additionally, by notifying the first responder system, the first responder system can take one or more steps to save the injured individuals when time is of the essence and the injured individual is in severe condition. The one or more steps may include pinpointing the location of the injured individual at a facility that has toppled due to a natural disaster, when finding the injured individual is next to impossible with the human eye alone; notifying one or more other individuals of the injured individual's status and location; and determining the injury of the injured individual in an efficient manner to provide the correct care.
- In some implementations, the techniques may utilize a set of sensors including a camera (or an array of cameras), a Global Positioning System (GPS) device, and an ultrasound transducer. Each of the sensors may be co-located and mounted on one unit, such as a plane or drone. The sensors can communicate to a backend over a WiFi or cellular communication network. In some implementations, the backend is responsible for transforming the data provided by each of the cameras, the GPS device, and the ultrasound transducer into one or more various types of data and performing advanced analytics on the various types of data. In some implementations, the backend is responsible for aggregating the various types of data into an aggregated map that incorporates all usable information provided by the camera, the GPS device, and the ultrasound transducer. The backend may provide the aggregated map to a first responder system such that the first responder system can identify and prioritize providing actionable rescue teams for the identified individuals.
- In one general aspect, a method is performed by one or more computers of a monitoring system. The method includes generating, by one or more sensors of a monitoring system that is configured to monitor a property, first sensor data; based on the first sensor data, generating, by the monitoring system, an alarm event for the property; based on generating the alarm event for the property, dispatching, by the monitoring system, an autonomous drone; navigating, by the autonomous drone of the monitoring system, the property; generating, by the autonomous drone of the monitoring system, second sensor data; based on the second sensor data, determining, by the monitoring system, a location within the property where a person is likely located; and providing, for output by the monitoring system, data indicating the location within the property where the person is likely located.
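- The following sketch is illustrative only; the names used in it (for example, run_alarm_response and DroneReport) are hypothetical and are not part of the disclosure. Under those assumptions, it shows how the claimed sequence of alarm generation, drone dispatch, and location output could be wired together.

```python
# Hypothetical sketch of the claimed flow; run_alarm_response, DroneReport,
# is_alarm, dispatch_drone, and notify are illustrative assumptions, not an
# API defined by this disclosure.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class DroneReport:
    sensor_data: dict                               # "second sensor data" gathered while navigating
    likely_location: Optional[Tuple[float, float]]  # coordinates within the property, if any

def run_alarm_response(
    first_sensor_data: dict,
    is_alarm: Callable[[dict], bool],
    dispatch_drone: Callable[[], DroneReport],
    notify: Callable[[Tuple[float, float]], None],
) -> None:
    """Generate an alarm event from the first sensor data, dispatch the drone,
    and output the location where a person is likely located."""
    if not is_alarm(first_sensor_data):
        return                              # no alarm event, so no dispatch
    report = dispatch_drone()               # drone navigates the property and senses
    if report.likely_location is not None:
        notify(report.likely_location)      # provide location data for output

# Example wiring with trivial stand-ins:
run_alarm_response(
    {"motion": True},
    is_alarm=lambda data: data.get("motion", False),
    dispatch_drone=lambda: DroneReport({"image": b"..."}, (12.0, 4.5)),
    notify=lambda loc: print(f"Person likely located near {loc}"),
)
```
- In this sketch the drone dispatch and notification steps are passed in as callables, mirroring the separation in the method between the monitoring system's decision logic and the drone's navigation and sensing.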
- Other embodiments of this and other aspects of the disclosure include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices. A system of one or more computers can be so configured by virtue of software, firmware, hardware, or a combination of them installed on the system that in operation cause the system to perform the actions. One or more computer programs can be so configured by virtue of having instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
- Implementations may include one or more of the following features. For example, in some implementations, based on navigating the property, generating, by the monitoring system, a map of the property; and providing, by the monitoring system, for output, data indicating the location within the property where the person is likely located by providing, for output, the map of the property with the location where the person is likely located.
- In some implementations, the method further includes determining, by the monitoring system, that the person is likely injured based on second sensor data; and providing, for output by the monitoring system, data indicating that the person is likely injured.
- In some implementations, the method further includes based on determining that the person is likely injured, generating, by the autonomous drone of the monitoring system, using an additional onboard sensor, third sensor data; based on the third sensor data, determining, by the monitoring system, a severity of the injury to the person; and providing, for output by the monitoring system, the data indicating that the person is likely injured by providing, for output, the data indicating that the person is likely injured and data indicating the severity of the injury to the person.
- In some implementations, the onboard sensor is a camera and the second sensor data is image data, and the additional onboard sensor is an ultrasound sensor and the third sensor data is ultrasound data.
- In some implementations, the method further includes providing, by the monitoring system, second sensor data as an input to a model trained to identify locations of people; and determining, by the monitoring system, the location within the property where a person is likely located based on an output of the model trained to identify locations of people based on the second sensor data.
- In some implementations, the method further includes receiving, by the monitoring system, labeled training data that includes first labeled sensor data that corresponds to locations with people and second labeled sensor data that corresponds to locations without people; and training, by the monitoring system, using machine learning, the first labeled sensor data, and the second labeled sensor data, the model to identify locations of people based on the second sensor data.
- In some implementations, the method further includes based on the second sensor data, determining, by the monitoring system, that the person is likely alive; and providing, by the monitoring system, for output, data indicating that the person is likely alive.
- In some implementations, the second sensor is a microphone and the second sensor data is audio data, and the method further includes providing, by the monitoring system, the audio data as an input to a model trained to identify human sounds; and determining, by the monitoring system, that the person is likely alive based on an output of the model trained to identify human sounds.
- In some implementations, the method further includes based on determining a location within the property where a person is likely located, activating, by the monitoring system, a communication channel between a device outside the property and the autonomous drone.
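- As one illustration of the training approach described above (labeled sensor data from locations with people and from locations without people), the following sketch trains a simple classifier. The feature extraction and the scikit-learn model choice are assumptions made for illustration; the disclosure does not prescribe a particular model family.

```python
# Minimal sketch, assuming feature vectors have already been extracted from the
# sensor data; a simple scikit-learn classifier stands in for the model trained
# to identify locations of people.
import numpy as np
from sklearn.linear_model import LogisticRegression

# First labeled sensor data: feature vectors from locations with people (label 1).
with_people = np.array([[0.9, 0.2], [0.8, 0.4], [0.95, 0.1]])
# Second labeled sensor data: feature vectors from locations without people (label 0).
without_people = np.array([[0.1, 0.7], [0.05, 0.9], [0.2, 0.8]])

X = np.vstack([with_people, without_people])
y = np.array([1] * len(with_people) + [0] * len(without_people))

model = LogisticRegression().fit(X, y)

# At run time, second sensor data gathered by the drone is scored the same way.
new_observation = np.array([[0.85, 0.3]])
print("Probability a person is present:", model.predict_proba(new_observation)[0, 1])
```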
- The details of one or more embodiments of the subject matter of this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
-
FIG. 1 is a contextual diagram of an example system of an integrated security environment for detecting one or more injured individuals at a monitored facility. -
FIG. 2 is a contextual diagram of an example system of a building destruction environment for detecting one or more injured individuals. -
FIG. 3 is a contextual diagram of an example system for training a neural network model for ultrasound analytics. -
FIG. 4 is a flowchart of an example process for providing data corresponding to a detected individual for ultrasound analytics. -
FIG. 5 is a flowchart of an example process for processing data corresponding to a detected individual for ultrasound analytics. -
FIG. 6 is a block diagram of an example integrated security environment for ultrasound analytics that may utilize various security components. -
FIG. 1 is a contextual diagram of an example system 100 of an integrated security environment for detecting one or more injured individuals at a monitored facility. Though system 100 is shown and described as including a particular set of components in a monitored property 102, including a control unit server 104, network 106, cameras 108, lights 110, sensors 112, home devices 114, security panel 126, drone 130, network 134, remote processing unit 136, and first responder system 140, the present disclosure need not be so limited. For instance, in some implementations, only a subset of the aforementioned components may be used by the integrated security environment for monitoring the control unit servers in each monitored property. As an example, there may be a system 100 that does not use the lights 110. Similarly, there may be implementations in which the control unit, such as control unit server 104, is included in the remote processing unit 136. Yet other alternative systems also fall within the scope of the present disclosure, such as a system 100 that does not use a control unit server 104. Rather, these systems would communicate directly with the remote processing unit 136 to perform the monitoring. For these reasons, the system 100 should not be viewed as limiting the present disclosure to any particular set of necessary components. - As shown in
FIG. 1, a residential facility 102 (e.g., a home) of user 118 is monitored by a control unit server 104 that includes components within the residential facility 102. The components within the residential facility 102 may include one or more cameras 108, one or more lights 110, one or more sensors 112, one or more home devices 114, and the security panel 126. The one or more cameras 108 may include video cameras that are located at the exterior of the residential facility 102 near the front door 116, as well as at the interior of the residential facility 102 near the front door 116. The one or more sensors 112 may include a motion sensor located at the exterior of the residential facility 102, a front door sensor that is a contact sensor positioned at the front door 116, and a lock sensor that is positioned at the front door 116 and each window. The contact sensor may sense whether the front door 116, the garage door, or the window is in an open position or a closed position. The lock sensor may sense whether the front door 116 and each window is in an unlocked position or a locked position. The one or more home devices 114 may include home appliances such as a washing machine, a dryer, a dishwasher, an oven, a stove, a microwave, and a laptop, to name a few examples. The security panel 126 may receive one or more messages from a corresponding control unit server 104 and a remote processing unit 136. - The
control unit server 104 communicates over a short-range wired or wireless connection over network 106 with connected devices such as each of the one or more cameras 108, one or more lights 110, one or more home devices 114 (a washing machine, a dryer, a dishwasher, an oven, a stove, a microwave, a laptop, etc.), one or more sensors 112, the drone 130, and the security panel 126 to receive sensor data descriptive of events detected by the one or more cameras 108, the one or more lights 110, the drone 130, and the one or more home devices 114 in the residential facility 102. In some implementations, each of the connected devices may connect via Wi-Fi, Bluetooth, or any other protocol used to communicate over network 106 to the control unit server 104. Additionally, the control unit server 104 communicates over a long-range wired or wireless connection with a remote processing unit 136 over network 134 via one or more communication links. In some implementations, the remote processing unit 136 is located remote from the residential facility 102 and manages the monitoring at the residential facility 102, as well as other (and, perhaps, many more) monitoring systems located at different properties that are owned by different users. In other implementations, the remote processing unit 136 communicates bi-directionally with the control unit server 104. Specifically, the remote processing unit 136 receives sensor data descriptive of events detected by the sensors included in the monitoring system of the residential facility 102. Additionally, the remote processing unit 136 transmits instructions to the control unit server 104 for particular events. - In some implementations, a
user 118 may install a device to monitor theremote property 102 from the outside. For instance, theuser 118 may install adrone 130 and acorresponding charging station 142 to monitor the activity occurring outside and inside theresidential property 102. In some implementations, thecontrol unit server 104 may detect when thedrone 130 has departed from the chargingstation 142. Thedrone 130 may automatically depart from the chargingstation 142 at predetermined times set by theuser 118 according to a signature profile. Once departed from the chargingstation 142, thedrone 130 may fly apredetermined path 132 as set by the user according to a profile. Thepredetermined path 132 may be any path around theresidential property 102 as described by the signature profile. The signature profile will be further explained below. - In some implementations, the
drone 130 will have a set ofdevices 131 for providing sensor data to thecontrol unit 104. The set ofdevices 131 may include a camera or an array of cameras, a GPS device, and an ultrasound transducer, to name a few examples. Thedrone 130 may instruct the set ofdevices 131 to record and monitor while thedrone 130 flies thepredetermined path 132. - In the example shown in
FIG. 1 ,user 118 may be in theresidential facility 102 and can arm theresidential facility 102 at any point in time. In doing so, theuser 118 may turn off each of the one ormore lights 110, turn off each of the one ormore home devices 114, lock thefront door 116, and close and lock each of the one or more windows. Theuser 118 may interact with a client device 120 to activate a signature profile, such as “arming home” for theresidential facility 102. Alternatively, theuser 118 may keep the one ormore lights 110 on, keep the one ormore home devices 114 on while setting the “arming home” profile. - In some implementations, the client device 120 may display a web interface, an application, or a device specific application for a smart home system. The client device 120 can be, for example, a desktop computer, a laptop computer, a tablet computer, a wearable computer, a cellular phone, a smart phone, a music player, an e-book reader, a navigation system, a security panel, or any other appropriate computing device. In some implementations, the client device 120 may communicate with the
control unit server 104 over thenetwork 106. Thenetwork 106 may be wired or wireless or a combination of both and can include the Internet. - In some implementations,
user 118 may communicate with the client device 120 to activate a signature profile for theresidential facility 102. To illustrate,user 118 may first instruct thecontrol unit server 104 to set a signature profile associated with arming theresidential facility 102. For example,user 118 may use a voice command to say “Smart Home, arm house,” to the client device 120. The voice command may include a phrase, such as “Smart Home” to trigger the client device 120 to actively listen to a command following the phrase. Additionally, the phrase “Smart Home” may be a predefined user configured term to communicate with the client device 120. The client device 120 can send the voice command to thecontrol unit server 104 over thenetwork 106. - In some implementations, the
control unit server 104 may notify theremote processing unit 136 that theresidential facility 102 is to be armed. In addition, thecontrol unit 104 may set associated parameters in response to receiving the voice command. Moreover, thecontrol unit 104 can send back a confirmation to the client device 120 in response to arming theresidential facility 102 and setting the associated parameters. For example, thecontrol unit server 104 may transmit a response to the client device 120 that reads “Smart Home armed.” - In some implementations, in order for the
control unit server 104 to allowuser 118 and others to set and activate a signature profile case for theresidential facility 102, theuser 118 and others may define and store signature profiles in thecontrol unit server 104. In other implementations, theuser 118 and others may define and store signature profiles in theremote processing unit 136. The signature profile may be associated with each user and allow for various use cases of the devices in theresidential facility 102. Each of the signature profiles can be associated with one user, such asuser 118 oruser 124. For example, auser 118 may create a signature profile for arming theresidential facility 102. In another example, auser 122 may create a signature profile for monitoring theresidential facility 102 with adrone 130 for monitoring theresidential facility 102. - In some implementations,
user 122 may store one or more parameters associated with a use case in his or her signature profile. Specifically, the one or more parameters for each use case may describe a volume level in decibels (dB) of thespeakers 108, an aperture amount for thecameras 110, a brightness intensity level of thelights 112, turning on home devices 117 such as television, laptop, one or more fans, setting a specific temperature of a thermometer, opening or closing the shades of a window a particular amount, alarm settings corresponding to thesecurity panel 126, defining a predetermined path and a length of time for thedrone 130 to monitor theresidential facility 102, and any other parameters to describe the use case. For example,user 122 may create a signature profile with a use case for “arm home”. Theuser 122 may define a volume level of 0 dB for thespeakers 108, an aperture of f/16 for the one ormore cameras 110, zero lumens for the one ormore lights 112, turning off a television, turning off a laptop, turning on fans, setting the thermometer to 67 degrees Fahrenheit, fully closing the blinds of the one or more windows, and setting thesecurity panel 126 to notify theremote processing unit 136 for any detected alarms. - In some implementations, the
user 118 may define a predetermined path 132 for the drone 130 to monitor around the residential facility 102. The predetermined path 132 may be drawn by the user 118 through interaction with the smart home application on the client device 120. The user 118 may additionally define the height and speed at which the drone 130 flies around the residential property 102. For instance, the user 118 may draw a circle on a map provided by the smart home application on the client device 120, set the altitude to 10 feet, and set the drone 130's flying speed to 15 miles per hour. The user 118 can define a period of time for the drone to monitor the residential property 102. For example, the user 118 may enter the time of 1 hour into the smart home application on the client device 120. Following the time period in which the drone 130 monitors the residential property 102, the user 118 can instruct the drone to return to the charging station 142 or to traverse a new predetermined path around the residential property 102, different from the predetermined path 132.
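One way to picture the signature profile and the user-defined patrol described above is as a small configuration object. The sketch below is purely illustrative; every type and field name (for example, SignatureProfile and DronePatrol) is an assumption and not a format defined by the disclosure.

```python
# Hypothetical representation of an "arming home" signature profile and the
# drone's predetermined path; all names and defaults are illustrative.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DronePatrol:
    waypoints: List[Tuple[float, float]]   # GPS coordinates drawn on the map
    altitude_ft: float = 10.0              # flight altitude set by the user
    speed_mph: float = 15.0                # flight speed set by the user
    duration_min: int = 60                 # how long to monitor before returning

@dataclass
class SignatureProfile:
    name: str
    speaker_volume_db: float = 0.0
    camera_aperture: str = "f/16"
    light_lumens: int = 0
    thermostat_f: int = 67
    blinds_closed: bool = True
    notify_remote_on_alarm: bool = True
    patrol: DronePatrol = field(default_factory=lambda: DronePatrol(waypoints=[]))

arm_home = SignatureProfile(
    name="arm home",
    patrol=DronePatrol(waypoints=[(38.90, -77.03), (38.91, -77.03), (38.91, -77.02)]),
)
print(arm_home.patrol.duration_min, "minutes of monitoring at",
      arm_home.patrol.altitude_ft, "feet")
```

- In some implementations, the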
control unit server 104 sets the parameters for the signature profile when theuser 122 speaks “Smart home, arming the home” to client device 120. Thecontrol unit server 104 saves the parameters in memory defined by theuser 118 in the smart home application on the client device 120 in response to the user setting the parameters. In addition, thecontrol unit server 104 may transmit the set parameters for the signature profile to theremote processing unit 136 to save for backup purposes. - In some implementations, the
control unit server 104 may increase the sensitivity corresponding to each of the one or more sensors 114 for the “arming the home” use case. Specifically, control unit server 104 may increase the sensitivity for the front door sensor, the garage door sensor, and the lock sensor by a predetermined factor so that smaller movements of the front door or garage door trigger an alarm event. For example, the sensitivity may be increased by a factor of five.
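As a small illustration of the sensitivity change described above, an armed profile could divide each sensor's trigger threshold by the predetermined factor; the threshold units and helper name below are assumptions, not part of the disclosure.

```python
# Assumed threshold semantics: a smaller trigger threshold means smaller
# movements of the door raise an alarm event.
def increase_sensitivity(trigger_threshold_mm: float, factor: float = 5.0) -> float:
    return trigger_threshold_mm / factor

door_threshold_mm = 10.0                              # movement needed to trigger before arming
armed_threshold_mm = increase_sensitivity(door_threshold_mm)
print(armed_threshold_mm)                             # 2.0 mm once the home is armed
```

- In some implementations, the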
control unit server 104 may send a response to display a message on the client device 120 that says “Smart Home, home armed” once the control unit server 104 sets the parameters. The control unit server 104 may also transmit the same response to the display 128 of security panel 126 once the control unit server 104 sets the parameters. In addition, the control unit server 104 may transmit a message to the remote processing unit 136 indicating that the residential facility 102 has finished arming. - In some implementations, the
drone 130's set of devices 131 may seek to detect the health of one or more individuals inside the residential facility 102. In particular, the set of devices 131 may gather information on the health of the one or more individuals inside the residential facility 102. As the drone 130 flies around the residential facility 102, the drone 130 scans areas external and internal to the residential facility 102. In particular, the drone 130 may scan areas in proximity to the residential facility 102, scan through the walls of the residential facility 102 to see the interior of the residential facility 102, and monitor each level of the residential facility 102. The drone 130 uses local machine learning algorithms along with ultrasound data, images, and GPS locational data captured by the set of devices 131 to detect one or more individuals in the residential facility 102. Should the drone 130 detect an individual in the residential facility 102, the drone 130 may move closer to the individual to perform a more detailed scan. The drone 130 then sends the captured data to the control unit server 104 for further processing to determine the health of the one or more individuals. The control unit server 104 may also acquire sensor data from the cameras 108, the lights 110, the sensors 112, and the home devices 114 in response to receiving the captured data from the drone 130. The control unit server 104 provides the captured data and the sensor data to the remote processing unit 136 for further processing and a determination of whether a first responder system 140 should be contacted. - For example, during stage (A), the
user 118 sets the parameters for the “arming home” signature profile that includes a time for the drone to initiate monitoring theresidential property 102. At the set time as designated by the “arming home” signature profile, thecontrol unit server 104 sends an indication to thedrone 130 vianetwork 106 to initiate monitoring theresidential facility 102. The indication may include GPS coordinates of the predetermined path, the length of time to travel, and the altitude or varying altitude around theresidential facility 102 in which to travel. In some implementations, theremote processing unit 136 may send an indication to thecontrol unit server 104 to instruct thedrone 130 to initiate the monitoring of theresidential facility 102. In response to receiving the indication, thedrone 130 powers on, flies away from the chargingstation 142, and flies thepredetermined path 132 as set in the “arming home” signature profile. During flight, thedrone 130 uses the set ofsensors 131 to detect one or more individuals in theresidential facility 102. - In some implementations, the
control unit server 104 may use thecameras 108, thelights 110, thesensors 112, and thehome devices 114 in conjunction with the set ofsensors 131 to detect one or more individuals in theresidential facility 102. For instance, as thedrone 130 travels around thepredetermined path 132, thedrone 130 may send GPS coordinate updates to thecontrol unit server 104. Thecontrol unit server 104 may turn on one or more of thelights 110 in one or more areas currently being viewed by thedrone 130 to improve detectability. In addition, thecontrol unit server 104 may increase sensitivity of one ormore sensors 112 in the one or more areas currently being viewed by thedrone 130 to also improve detectability. Should a motion detector from the one ormore sensors 112 detect movement in an area of theresidential facility 102, thecontrol unit server 104 can transmit a GPS coordinate of the detected motion sensor to thedrone 130 to focus the set ofdevices 131 on the area designated by the transmitted GPS coordinate. The GPS coordinate may be inside or outside theresidential facility 102. - During stage (B), the
drone 130 detects an individual in theresidential facility 102. For instance, the set ofdevices 131 captures data during thedrone 130′s flight around thepredetermined path 132. The data includes camera images and GPS locational data. Thedrone 130 feeds the camera images and the GPS locational data to a local processing engine included in thedrone 130′s memory. The local processing engine produces an indication that an individual has been detected in the camera images. In response to determining that an individual, such asuser 118, has been detected, thedrone 130 moves closer to that individual to perform an ultrasound scan. Thedrone 130 may move closer to a window of theresidential facility 102 or closer to a wall of theresidential facility 102 to perform the ultrasound scan. Thedrone 130 may perform an ultrasound scan of theuser 118 at different portions of theuser 118′s body. For instance, thedrone 130 may initiatescanning user 118′s head, then move to scan theuser 118′s shoulder, and down touser 118′s feet. These ultrasound scans will be used later in constructing a mapped environment of theuser 118. - During stage (C), the
drone 130 detects another individual, such asuser 124, in theresidential facility 102. Thedrone 130 performs similar steps as described in stage (B) to detectuser 124. In some implementations, the local processing engine in thedrone 130 produces an indication of a detected person. In other implementations, the local processing engine in thedrone 130 may produce a recognition of a detected person. For instance, based on the training of the local processing engine, the local processing engine may produce an indication that a person has been detected or that the person detected isuser 124 or Bob. This indication will be further described below. - During stage (D), the
drone 130 provides the captureddrone data 133 to thecontrol unit server 104 over thenetwork 106. The captureddrone data 133 includes the captured images, the GPS locational data, and the indication provided by the local processing engine. Thecontrol unit server 104 receives the captureddrone data 133. Thecontrol unit server 104 combines the captureddrone data 133 with data provided by the one ormore cameras 108, the one ormore lights 110, and the one ormore sensors 112. For instance, thecontrol unit server 104 may package together the captureddrone data 133 with images and video from thecameras 108, a brightness level from the one ormore lights 110, and motion or contact data from the one ormore sensors 112 when a detection was made by thedrone 130. In addition, thecontrol unit server 104 may include the data changes indicating the brightness level of the one ormore lights 110 and the sensitivity changes of the one ormore sensors 112 to improve detectability for thedrone 130. This change data may facilitate theremote processing unit 136 in determining typical paths of one or more individuals in theresidential facility 102. This can be used to update thepredetermined path 132 of thedrone 130 for improved tracking of individuals. Once thecontrol unit server 104 packages the data, thecontrol unit server 104 transmits the packaged data assensor data 135 to theremote processing unit 136. - During stage (E), the
remote processing unit 136 receives thesensor data 135. Theremote processing unit 136 includes a remote processing engine to produce an indication of the health of the individual detected in the captured image. For instance, the remote processing engine of theremote processing unit 136 includes one or more machine learning algorithms that can produce an indication of an injury of the individual from thesensor data 135. The injuries may include one or more broken bones, external bleeding, and burn marks, to name a few examples. The indication output by theremote processing unit 136 may include an image from the ultrasound data including the detected individual and a tagged description of the injury. The remote processing engine provides the image and the tagged description of the injury to a severity indicator. The severity indicator tags the input with a number indicating the severity of the individual's health in the attached image. For example, as illustrated inFIG. 1 , thecontrol unit server 104 may providesensor data 135 of two detected individuals inresidential facility 102,user 118 anduser 124. The remote processing engine of theremote processing unit 136 may produce a severity indication of zero, corresponding to one or more images from the ultrasound data ofuser 118. The severity indication of zero indicates thatuser 118 has no injury or appears to have no injury. Likewise, the remote processing engine may produce a severity indication of ten, corresponding to one or more images from the ultrasound data ofuser 124, indicating a severe injury. The remote processing engine may detect thatuser 124 has broken his arm, as illustrated by the images in the ultrasound data. - During stage (F), the remote processing engine provides a notification to the owner of the
residential facility 102. The notification includes one or more images and the corresponding severity of an injury of an identified individual in each of the one or more images. In some implementations, the remote processing engine in theremote processing unit 136 provides the notification to the client device 120 ofuser 118. The client device 120 may display the one or more images and the corresponding severity of the injury of the identified individual in each of the one or more images to theuser 118. For example, the severity of the injury may include a number such as ten or display a message that recites “User Broke Arm” 122, as illustrated inFIG. 1 . Theuser 118 may proceed to locate the injured individual,user 124, to provide emergency assistance. - During stage (G), the remote processing engine provides a notification to a
first responder system 140. The notification includes a reconstructed mapped environment of the images of the ultrasound scans and a corresponding severity indicator for each of the images. As mentioned above, the reconstructed mapped environment may include an image converted from ultrasound ofuser 118′s head,user 118′s shoulders,user 118′s chest, and the remaining body sections down touser 118′s feet. Each of these ultrasound images reconstructed in the mapped environment may include a severity indicator. For instance, foruser 124 that broke his arm, the severity indicator corresponding to the head ofuser 124 may be zero, the severity indicator corresponding to the shoulder ofuser 124 may be one, the severity indicator corresponding to the arms ofuser 124 may be ten, and the severity indicator corresponding to the legs ofuser 124 may be two. This reconstructed mapped environment is provided to thefirst responder system 140 to facilitate determining an injury of the user, such asuser 124. In some implementations, thefirst responder system 140 may be police officers, firefighters, paramedics, and emergency medical technicians, to name a few examples. -
FIG. 2 is a contextual diagram of an example system of abuilding destruction environment 200 for detecting one or more injured individuals. Thebuilding destruction environment 200 includes a demolishedbuilding 202 as a result of a natural disaster, such as an earthquake. The demolishedbuilding 202 includes one or more trapped individuals that may have life threatening injuries. For instance, the demolishedbuilding 202 includesuser 204 lying down on the second floor of the demolishedbuilding 202 anduser 206 lying under the rubble at the bottom of the demolishedbuilding 202. In some implementations, a first responder, such as a firefighter or a police officer, may letdrone 208 fly around a path 210 around the demolished building 202 to find the one or more trapped individuals to detect their health status. -
FIG. 2 is similar toFIG. 1 without the inclusion of acontrol unit server 104 and one or more sensors at the demolishedbuilding 202. The only data provided to theremote processing unit 226 includes data retrieved from thedrone 208 itself. In addition, thedrone 208 can scan along path 210 until retrieved by a first responder via a client device. - During stage (A′), which is similar to stage (A) of
FIG. 1 , thedrone 208 flies a path 210 to find one or more individuals trapped in the demolishedbuilding 202. In some implementations, the path 210 may be preprogrammed by the first responder located at the scene of thebuilding destruction environment 200. In other implementations, the path 210 may be a random path taken by thedrone 208 around the demolishedbuilding 202. Thedrone 208 may fly the path 210 until a first responder retrieves thedrone 208. In some implementations, thedrone 208 may fly the path 210 until the first responder orfirst responder system 230 receives an indication from theremote processing unit 226 indicating a location of the one or more individuals in the demolishedbuilding 202 and a corresponding health status of the located one or more individuals. - During stages (B′) and (C′), which are similar to stages (B) and (C) of
FIG. 1, the drone 208 detects user 204 and user 206 in the demolished building 202, as illustrated by the arrows of detected person 212. Initially, the drone 208 utilizes the camera and GPS device from the set of sensors onboard the drone 208 to detect user 204 and user 206 in the demolished building 202. The drone 208 utilizes a local processing engine that uses one or more machine learning algorithms to detect individuals from the captured images. Once the local processing engine identifies one or more individuals in the captured images, the local processing engine tags the individuals in the image with GPS locational data from the GPS device. The GPS locational data describes the locational position of the detected individual. For instance, the drone 208 calculates the locational position of the detected individual using the GPS locational position of the drone 208, the altitude of the drone 208, and an estimated distance between the drone 208 and the detected individual using slope estimation.
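The slope-estimation step above can be read as simple right-triangle geometry: the drone's altitude and the estimated slant range to the person give a horizontal offset, which is then applied to the drone's GPS fix. The sketch below is one plausible reading under those assumptions; the bearing input and the flat-ground simplification are not specified by the disclosure.

```python
# One plausible reading of the slope-estimation step: project the person's
# ground position from the drone's GPS fix, altitude, slant range, and bearing.
import math

EARTH_RADIUS_M = 6_371_000.0

def estimate_person_position(drone_lat, drone_lon, drone_alt_m,
                             slant_range_m, bearing_deg):
    # Horizontal distance from the right triangle formed by altitude and slant range.
    horizontal_m = math.sqrt(max(slant_range_m ** 2 - drone_alt_m ** 2, 0.0))
    north_m = horizontal_m * math.cos(math.radians(bearing_deg))
    east_m = horizontal_m * math.sin(math.radians(bearing_deg))
    lat = drone_lat + math.degrees(north_m / EARTH_RADIUS_M)
    lon = drone_lon + math.degrees(east_m / (EARTH_RADIUS_M *
                                             math.cos(math.radians(drone_lat))))
    return lat, lon

# Drone at 6 m altitude, 10 m slant range to the person, bearing 45 degrees.
print(estimate_person_position(38.9000, -77.0300, 6.0, 10.0, 45.0))
```

- During stage (D′), the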
drone 208 moves closer to a detected individual to perform an ultrasound scan. In order to ensure high-quality ultrasound results, the drone 208 may be programmed to move as close as possible to the detected individual, such as user 206 collapsed under the rubble. The drone 208 may perform a full body ultrasound scan to capture all features of user 206. In some implementations, one or more portions of user 206's body may be covered by rubble. The drone 208 may only perform scans on the exposed portion of user 206's body. Following the ultrasound scans of the user 206's body, the drone 208 may move to the next detected individual, such as user 204, to perform the ultrasound scan on user 204. In some implementations, the drone 208 may receive an audible sound coming from the user 204 while performing the ultrasound scan. If the drone 208 determines the audible sound is greater than a threshold level, such as the user 204 screaming or moaning in pain, the drone 208 can include an emergency request for the user 204 in danger in the data to provide to the remote processing unit 226. In addition, the drone 208 can initiate communication with a first responder system 230 if the drone 208 determines the user 204 is in severe danger based on the audible sound being greater than the threshold level. Alternatively, the drone 208 can provide an indication to the user 204 to keep calm. For instance, the drone 208 can play a calming song or the drone 208 can play an audible message to the user 204 that recites “Please remain calm, help is on the way.” The drone 208 may recite other messages to the user 204. Alternatively, the drone 208 may cease performing the ultrasound scan if the drone 208 determines the user 204 is scared. Afterwards, the drone 208 may return to the path 210 to find any other individuals in the demolished building 202.
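The audible-distress handling above amounts to a small decision rule. The sketch below shows that rule under hypothetical names; the decibel threshold and the escalation hooks are assumptions for illustration only.

```python
# Hypothetical threshold and hooks; the disclosure only says the sound is
# compared against a threshold level and the response escalates accordingly.
DISTRESS_THRESHOLD_DB = 85.0

def handle_audible_sound(level_db: float, payload: dict) -> str:
    if level_db > DISTRESS_THRESHOLD_DB:
        payload["emergency_request"] = True   # flagged for the remote processing unit
        return "contact_first_responders"     # the drone may also open a channel itself
    return "play_calming_message"             # e.g., "Please remain calm, help is on the way."

payload = {}
print(handle_audible_sound(92.0, payload), payload)
```

- During stage (E′), the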
drone 208 transmits data to the remote processing unit 226. The data includes detected person data 216, ultrasound data 218, location data 220, and detected image data 222. The detected person data 216 includes information corresponding to the number of individuals detected during the drone 208's scan on path 210. For example, the detected person data 216 may indicate that two individuals, user 204 and user 206, were detected in the demolished building 202. The ultrasound data 218 may include the ultrasound scans of the exposed body portions of user 204 and user 206. The location data 220 may include the GPS locational data of user 204 and user 206. The detected image data 222 may include the images from the drone 208's camera that include the detected individuals and non-detected images. In some implementations, the images may include a tag indicating whether an individual is detected or not detected in that image.
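The four data products listed above can be thought of as one upload payload. The container below is an illustrative assumption; the field names mirror the labels in the text but are not a format defined by the disclosure.

```python
# Illustrative container for the data the drone uploads to the remote unit.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class TaggedImage:
    pixels: bytes
    person_detected: bool
    location: Optional[Tuple[float, float]] = None  # GPS fix of the detected person

@dataclass
class DroneUpload:
    detected_person_count: int            # detected person data (e.g., two people found)
    ultrasound_scans: List[bytes]         # ultrasound data for exposed body regions
    locations: List[Tuple[float, float]]  # location data for each detected person
    images: List[TaggedImage]             # image data tagged detection / no detection

upload = DroneUpload(2, [b"scan-1", b"scan-2"],
                     [(38.901, -77.031), (38.902, -77.030)],
                     [TaggedImage(b"...", True, (38.901, -77.031))])
print(upload.detected_person_count, "people reported")
```

- During stage (F′), which is similar to stage (E) of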
FIG. 1 , the remote processing engine in theremote processing unit 226 processes the detected person data 216, theultrasound data 218, thelocation data 220, and the detectedimage data 222 to produce an indication of the health of the one or more detected individuals. - During stage (G′), which is similar to stage (G) of
FIG. 1 , the remote processing engine provides anotification 228 to thefirst responder system 230. As mentioned earlier, the notification includes a reconstructed mapped environment of the images of the ultrasound scans and a corresponding severity indicator for each of the images. - In another exemplary use case, a drone, such as
drone 208, can fly a particular path around a vehicular accident to locate one or more individuals trapped in the vehicles. Thedrone 208 may or may not be programmed with a predetermined path 210 by a first responder. In particular, thedrone 208 can be programmed to monitor an area that includes the vehicular accident. For example, thedrone 208 can fly above the vehicular accident, near the windows of the vehicles involved in the accident, and low to the ground to search underneath the vehicles to determine whether an individual has been trapped underneath the vehicle. Thedrone 208 can perform steps similar to that ofFIG. 1 andFIG. 2 to notify first responders if one or more injured individuals are found. - In another exemplary use case,
drone 208 can fly a particular path around a search and rescue area in a forest to locate one or more lost individuals. Thedrone 208 may or may not be programmed with a predetermined path 210 by a first responder to fly through the forest searching for the lost individuals. If thedrone 208 detects a lost individual, thedrone 208 can perform steps similar to that ofFIG. 1 andFIG. 2 to notify first responders and determine if the detected individual is injured. -
FIG. 3 is a contextual diagram of an example system 300 for training a neural network model for ultrasound analytics. The system 300 can train other types of machine learning models for ultrasound analytics, such as one or clustering models, one or more deep learning models, Bayesian learning models, or any other type of model. Briefly, and as described in more detail below, the system 300 illustrates the application of a neural network model in the local processing engine of thedrone 130 and the application of a neural network model in the remote processing engine of theremote processing unit 136. In some implementations, the data provided as input to the model in the local processing engine comes from the set ofsensors 131 mounted on thedrone 314. In some implementations, the data provided as input to the model in the remote processing engine comes from an output of analyzing the sensor data processed by the local processing engine. - In some implementations, the local processing engine in the
drone 314 trains a neural network model while thedrone 314 is offline. The neural network model may include an input layer, an output layer, and one or more hidden layers. The local processing engine may use a machine learning technique to continuously train the neural network model. The local processing engine trains its neural network model using one or more training techniques. For instance, the local processing engine may train the neural network model using images that include zero or more individuals and a tag as to whether or not an individual exists in the image. The local processing engine applies the neural network model once sufficiently trained. - In some implementations, the local processing engine in the
drone 314 applies images captured from the camera mounted on thedrone 130 to the trainedmodel 304. Thedrone 314 sequentially inputs eachimage 302A-302N to the trainedmodel 304 at a predetermined time interval. For instance, the predetermined time interval may be the length of time it takes for the trainedmodel 304 to process one image 302C. In another instance, the predetermined time interval may be spaced by a time, such as 2 seconds. - In some implementations, the trained
model 304 produces an output for each image input to the trained model 304. The output of the trained model 304 includes a detection or non-detection 306 and the input image 302N. The detection or non-detection 306 includes an indication of whether a person is detected in the image 302N. If a person is not detected in an image, such as image 302N, the local processing engine tags the image as no individual detected. Alternatively, if the local processing engine indicates a detection in 306, the image 302N is provided as input to the location detection 310. In the location detection 310, the local processing engine calculates the locational position of the detected individual using the GPS location position of the drone 314, the altitude of the drone 314, and an estimated distance between the drone 314 and the detected individual using slope estimation. The image 302N is tagged with the locational position of the detected individual.
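The per-image flow described above (detect, tag, and locate only on a detection) can be sketched as a small loop. The callables below are stand-ins for the trained model 304 and the location detection 310; the interval handling and names are assumptions for illustration.

```python
# Sketch of the detection loop: images are fed to the trained model one at a
# time, non-detections are tagged, and detections are handed to the location step.
import time
from typing import Callable, Iterable, List, Tuple

def detection_loop(
    images: Iterable[bytes],
    detect_person: Callable[[bytes], bool],          # trained model 304 stand-in
    locate: Callable[[bytes], Tuple[float, float]],  # location detection 310 stand-in
    interval_s: float = 2.0,
) -> List[dict]:
    results = []
    for image in images:
        tag = {"image": image, "person_detected": detect_person(image)}
        if tag["person_detected"]:
            tag["location"] = locate(image)          # tag image with the person's position
        results.append(tag)
        time.sleep(interval_s)                       # predetermined spacing between inferences
    return results

# Example with trivial stand-ins and no real delay:
out = detection_loop([b"a", b"b"], lambda img: img == b"b",
                     lambda img: (38.9, -77.0), interval_s=0.0)
print(out)
```

- In some implementations, the local processing engine instructs the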
drone 314 to perform an ultrasound scan at the locational position of the detected individual, such asuser 316, based on the determination that theimage 302N includesuser 316. Thedrone 314 moves in proximity to the location of theuser 316 and performs ultrasound scans of theuser 316 over different portions of theuser 316′s body. For instance, thedrone 314 may initiatescanning user 316′s head, then move to scan theuser 316′s shoulders, and down touser 316′s feet to capture all features ofuser 316. This ensures all parts ofuser 316 can be checked for a health status. - After performing the ultrasound scans, the
drone 314 provides the captured data to a remote processing unit 324. As mentioned earlier inFIG. 2 , thedrone 314 provides the detectedperson data 318, theultrasound data 320, thelocation data 322, and the detectedimage data 308 to the remote processing unit 324. In some implementations, thedrone 314 provides a new set of detectedperson data 318,ultrasound data 320,location data 322, and detectedimage data 308 each time a new ultrasound scan is performed on a newly detected individual. In other implementations, thedrone 314 provides a new set of data each time thedrone 314 comes in contact with the chargingstation 142. As transmission of data to thecontrol unit server 104 or the remote processing unit 324 draws battery usage that may be used for other purposes, such as flying or providing power to the set ofdevice 131 mounted on-board thedrone 314, thedrone 314 may be configured to only transmit data when connected to the chargingstation 142 to preserve battery life when monitoring theresidential facility 102. - In some implementations, the remote processing unit 324 receives the detected
person data 318, the ultrasound data 320, the location data 322, and the detected image data 308. The remote processing engine in the remote processing unit 324 processes each of the received data pieces. Initially, the remote processing engine provides the ultrasound data 320 to a reconstruction mechanism 328. First, the reconstruction mechanism 328 converts each scan of ultrasound into an image 329. For example, if the drone 314 performs ten ultrasound scans on user 316, then the reconstruction mechanism 328 converts the ten ultrasound scans to ten corresponding images. - In some implementations, the remote processing engine provides each
image 329 converted from an ultrasound scan to a trainedneural network model 330. The trainedmodel 330 is similar to trainedmodel 304. In particular, the trainedmodel 330 may include an input layer, an output layer, and one or more hidden layers. The remote processing engine may use a machine learning technique to continuously train the neural network model to create the trainedmodel 330. The remote processing engine applies the trainedmodel 330 once sufficiently trained. - In some implementations, the remote processing engine in the remote processing unit 324 applies
images 329 of the ultrasound data and the detectedperson data 318 to the trainedmodel 330. The trainedmodel 330 is trained to produce an indication 331 of the health of the individual detected in the image from the captured ultrasound. For example, the health of the individual 316 may include indicating whether the individual has sustained one or more broken bones, any external bleeding, or burn marks, to name a few examples. The remote processing engine may tag theinput image 329 with the indication 331. - In some implementations, the remote processing engine may provide the tagged
input image 329 with the indication 331 output from the trained model 330 to a severity indicator mechanism 332. The severity indicator mechanism 332 analyzes the tagged description 331 to determine a severity indicator 333 of the individual in the image 329. For instance, the severity indicator 333 is a number that indicates the severity of the individual's health according to the tagged description. For instance, if the tagged description indicated “external bleeding,” the severity indicator mechanism 332 may provide a severity indication of ten. In another instance, if the tagged description indicated “broken arm,” the severity indicator mechanism 332 may provide a severity indication of seven. This is because an external bleeding symptom may be more severe than a broken arm, depending on the severity of the external bleeding.
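The mapping from tagged injury descriptions to a severity score can be sketched as a small lookup. The table below only echoes the examples given in the text (external bleeding scoring higher than a broken arm); the default value and the function name are assumptions.

```python
# Minimal sketch of the severity-indicator mechanism: map the tagged injury
# description to a 0-10 score; real scoring would come from the trained components.
SEVERITY_BY_DESCRIPTION = {
    "no injury": 0,
    "broken arm": 7,
    "external bleeding": 10,
}

def severity_indicator(tagged_description: str) -> int:
    return SEVERITY_BY_DESCRIPTION.get(tagged_description.lower(), 5)  # default: unknown

print(severity_indicator("external bleeding"))   # 10
print(severity_indicator("broken arm"))          # 7
```

- In some implementations, the severity indicator mechanism 332 reconstructs a mapped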
environment 334 using the images converted from the ultrasound scans and the corresponding severity indicator for each of the images. For example, the severity indicator mechanism 332 reconstructs the mapped environment of the images of the ultrasound scan performed on user 316. The reconstructed mapped environment 334 may include an image converted from ultrasound of user 316's head, user 316's shoulders, user 316's chest, and the remaining body sections down to user 316's feet. Each of these images reconstructed in the mapped environment may include a severity indicator 333. For instance, for user 316, who may have a broken leg, the severity indicator mechanism 332 may designate a severity indicator of zero to the head of user 316, a severity indicator of one corresponding to the shoulder of user 316, a severity indicator of zero corresponding to the arms of user 316, and a severity indicator of ten corresponding to the legs of user 316. The remote processing engine provides the reconstructed map 334 to the first responder system 335 to facilitate determining an injury of an identified user.
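Assembling that mapped environment can be pictured as pairing each converted ultrasound image with its body region and severity indicator. The structure below is an illustrative assumption; the disclosure does not define a concrete data format for the reconstructed map.

```python
# Sketch of assembling the reconstructed mapped environment: one converted
# ultrasound image per body region, each paired with its severity indicator.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class RegionScan:
    region: str        # e.g., "head", "shoulders", "arms", "legs"
    image: bytes       # image converted from the ultrasound scan
    severity: int      # severity indicator for this region

def build_mapped_environment(scans: List[RegionScan]) -> Dict[str, dict]:
    """Stitch per-region scans into a single map keyed by body region."""
    return {s.region: {"image": s.image, "severity": s.severity} for s in scans}

mapped = build_mapped_environment([
    RegionScan("head", b"...", 0),
    RegionScan("shoulders", b"...", 1),
    RegionScan("arms", b"...", 0),
    RegionScan("legs", b"...", 10),
])
print({region: entry["severity"] for region, entry in mapped.items()})
```

- In some implementations, the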
first responder system 335 can further train the trainedmodel 330. For instance, after thefirst responder system 335 receives the reconstructedmap 334, an individual, such as a medic, of thefirst responder system 335 may determine that theuser 316 does not in fact have a broken leg, as determined by the trainedmodel 330. In response, the medic of thefirst responder system 335 can update one or more medical reports that the trainedmodel 330 accesses to generate a reconstructed mappedenvironment 334 to reflect a change to the medical diagnosis of the leg ofuser 316. - In some implementations, the
first responder system 335 may store the medical reports and transfer the medical records to the remote processing unit 226. The remote processing engine may access the medical records for retraining the trained model 330. For instance, rather than the medical diagnosis indicating the leg of user 316 as being broken, the medical diagnosis in the medical reports indicates that the user 316's leg is healthy. The trained model 330 can access the received updated reports and the corresponding image 329 used in the reconstructed mapped environment 334 to retrain the trained model 330 to identify that the leg of user 316 in the image 329 is not broken. The trained model 330 can be retrained with other medical diagnosis updates for user 316 and other users. -
FIG. 4 is a flowchart of anexample process 400 for providing data corresponding to a detected individual for ultrasound analytics. Generally, theprocess 400 includes determining an indication of an individual in a frame of image data; determining a location of the identified individual in the frame of data using locational coordinates; obtaining ultrasound data of the identified individual in response to a drone's movement in proximity to the location of the identified individual to capture the ultrasound data; and, providing the identification of the individual, the location of the identified individual, the frame of image data, and the ultrasound data of the identified individual to a remote processing unit. - During 402, the
drone 130 determines an identification of an individual in a frame of image data. The drone 130's set of devices 131 captures data during the drone 130's flight around the predetermined path 132. The data includes camera images and GPS locational data. The drone 130 feeds the camera images and the GPS locational data to a local processing engine included in the drone 130's memory. The local processing engine produces an indication that an individual has been detected in the camera images. In particular, the local processing engine in the drone 314 applies images captured from the camera mounted on the drone 130 to a trained neural network model 304. The trained neural network model 304 produces an output for each image that indicates a detection of a person or a non-detection of a person in the image. - During 404, the local processing engine determines a location of the identified individual in the frame of data using locational coordinates. In some implementations, the local processing engine calculates the locational position of the detected individual using the GPS location position of the
drone 314, the altitude of thedrone 314, and an estimated distance between thedrone 314 and the detected individual using slope estimation. Theimage 302N is tagged with the locational position of the detected individual. - During 406, the local processing engine obtains ultrasound data of the identified individual in response to
drone 130′s movement in proximity to the location of the identified individual to capture the ultrasound data. In some implementations, the local processing engine instructs thedrone 314 to perform an ultrasound scan at the locational position of the detected individual, such asuser 316, based on the determination theimage 302N detects theuser 316. Thedrone 314 moves in proximity to the position of theuser 316 and performs ultrasound scans of theuser 316 over different portions of theuser 316′s body. For instance, thedrone 314 may initiatescanning user 316′s head, then move to scan theuser 316′s shoulders, and proceed down touser 316′s feet to capture all features ofuser 316. This ensures all parts ofuser 316 can be checked for a health status. - During 408, the local processing engine provides the identification of the individual, the location of the identified individual, the frame of image data, and the ultrasound data of the identified individual to a remote processing unit. In some implementations, the
drone 314 transmits the detectedperson data 318, theultrasound data 320, thelocation data 322, and the detectedimage data 308 to the remote processing unit 324. In some implementations, thedrone 314 provides a new set of detectedperson data 318,ultrasound data 320,location data 322, and detectedimage data 308 each time a new ultrasound scan is performed on a newly detected individual. The detectedperson data 318 includes information corresponding to the number of individuals detected during thedrone 314′s scan on path. Thelocation data 322 may include the GPS locational data ofuser 316. The detectedimage data 308 may include the images from thedrone 314′s camera that include the detected individuals and non-detected images. In some implementations, the images may include a tag indicating whether an individual is detected or not detected in that image. -
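Steps 402 through 408 can be tied together in a single drone-side routine. The sketch below is a hypothetical composition of the components described above; every callable is a stand-in, not an onboard API defined by the disclosure.

```python
# Compact sketch tying together steps 402-408 on the drone side.
from typing import Callable, List, Optional, Tuple

def process_frame(
    frame: bytes,
    detect: Callable[[bytes], bool],                            # step 402: identify a person
    locate: Callable[[bytes], Tuple[float, float]],             # step 404: locational coordinates
    scan_ultrasound: Callable[[Tuple[float, float]], List[bytes]],  # step 406: move close and scan
    upload: Callable[[dict], None],                             # step 408: send to the remote unit
) -> Optional[dict]:
    if not detect(frame):
        return None
    location = locate(frame)
    payload = {
        "frame": frame,
        "location": location,
        "ultrasound": scan_ultrasound(location),  # scans over different body regions
        "person_detected": True,
    }
    upload(payload)
    return payload

# Example with trivial stand-ins:
process_frame(b"img", lambda f: True, lambda f: (38.9, -77.0),
              lambda loc: [b"head", b"torso"],
              lambda p: print("uploaded payload for location", p["location"]))
```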
FIG. 5 is a flowchart of an example process 500 for processing data corresponding to a detected individual for ultrasound analytics. Generally, the process 500 includes obtaining an identification of an individual, a location of the identified individual, a frame of image data, and ultrasound data of the identified individual from a drone; generating an ultrasound image from the obtained ultrasound data; determining whether the ultrasound image includes the identified individual as having an injury; generating a severity indicator corresponding to each of the ultrasound images; generating a mapped environment that includes the ultrasound images stitched together with the corresponding severity indicator for each of the ultrasound images; and providing the mapped environment to a first responder system. - During 502, the remote processing engine obtains an identification of an individual, a location of the identified individual, a frame of image data, and ultrasound data of the identified individual from a
drone 130. In some implementations, the remote processing unit 324 receives the detectedperson data 318, theultrasound data 320, thelocation data 322, and the detectedimage data 308. The remote processing engine in the remote processing unit 324 processes each of the received data items. - During 504, the remote processing engine generates an ultrasound image from the obtained ultrasound data. In some implementations, the remote processing engine provides the
ultrasound data 320 to a reconstruction mechanism 328. First, the reconstruction mechanism 328 may convert each scan of ultrasound into an image 329. For example, if the drone 314 performs ten ultrasound scans on user 316, then the reconstruction mechanism 328 converts the ten ultrasound scans to ten corresponding images. - During 506, the remote processing engine determines whether the ultrasound image includes the identified individual as having an injury. In some implementations, the remote processing engine provides each image converted from an ultrasound scan to a trained
neural network model 330. The trainedmodel 330 is trained to produce an indication 331 of the health of the individual detected in the image from the captured ultrasound. For example, the health of the individual 316 may include an indication of whether the individual has sustained one or more broken bones, any external bleeding, or burn marks, to name a few examples. The remote processing engine may tag theinput image 329 with the indication 331. - During 508, the remote processing engine generates a severity indicator corresponding to each of the ultrasound images. In some implementations, the remote processing engine may provide the tagged
input image 329 with the indication 331 output from the trainedmodel 330 to a severity indicator mechanism 332. The severity indicator mechanism 332 analyzes the tagged description 331 to determine aseverity indicator 333 of the individual in theimage 329. For instance, theseverity indicator 333 indicates a number that indicates the severity of the individual's health according to the tagged description. For instance, if the tagged description indicated “external bleeding,” the severity indicator mechanism 332 may provide a severity indication of ten. In another instance, if the tagged description indicated “broken arm,” the severity indicator mechanism 332 may provide a severity indication of seven. This is because an external bleeding symptom may be more severe than a broken arm, depending on the severity of the external bleeding. - During 510, the remote processing engine generates a mapped environment that includes the ultrasound images stitched together that includes the corresponding severity indicator for each of the ultrasound images. In some implementations, the severity indicator mechanism 332 reconstructs a mapped
environment 334 using the images converted from the ultrasound scans and the corresponding severity indicator for each of the images. For example, the severity indicator mechanism 332 reconstructs the mapped environment of the images of the ultrasound scan performed onuser 316. The reconstructed mappedenvironment 334 may include an image converted from ultrasound ofuser 316′s head,user 316′s shoulders,user 316′s chest, and the remaining body sections down touser 316′s feet. Each of these images reconstructed in the mapped environment may include aseverity indicator 333. For instance, foruser 316 who may have a broken leg, the severity indicator mechanism 332 may designate a severity indicator of zero to the head ofuser 316, a severity indicator of one corresponding to the shoulder ofuser 316, a severity indicator of zero corresponding to the arms ofuser 316, and a severity indicator of ten corresponding to the legs ofuser 316. - During 512, the remote processing engine provides the mapped environment to a first responder system. In some implementations, providing the reconstructed mapped
environment 334 to the first responder system 335 facilitates determining an injury of an identified user. -
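Steps 502 through 512 can likewise be composed into one remote-side routine. In the sketch below the callables stand in for the reconstruction mechanism, the trained model, and the severity indicator mechanism described above; the names and signatures are assumptions for illustration.

```python
# Compact sketch of steps 502-512 on the remote processing side.
from typing import Callable, Dict, List

def process_upload(
    ultrasound_scans: List[bytes],
    regions: List[str],
    to_image: Callable[[bytes], bytes],           # step 504: reconstruction mechanism stand-in
    classify_injury: Callable[[bytes], str],      # step 506: trained model output stand-in
    score: Callable[[str], int],                  # step 508: severity indicator stand-in
    send_to_first_responders: Callable[[Dict[str, dict]], None],  # step 512
) -> Dict[str, dict]:
    mapped = {}
    for region, scan in zip(regions, ultrasound_scans):   # step 510: stitch per body region
        image = to_image(scan)
        injury = classify_injury(image)
        mapped[region] = {"image": image, "injury": injury, "severity": score(injury)}
    send_to_first_responders(mapped)
    return mapped

# Example with trivial stand-ins:
process_upload([b"s1", b"s2"], ["arms", "legs"], lambda s: s,
               lambda img: "broken arm" if img == b"s1" else "no injury",
               lambda injury: 7 if "broken" in injury else 0,
               lambda m: print("notified first responders:", list(m)))
```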
FIG. 6 is a block diagram of an example integrated security environment 600 for ultrasound analytics that may utilize various components. The electronic system 600 includes a network 605, a control unit 610, one or more user devices 640 and 650, a monitoring application server 660, and a central alarm station server 670. In some examples, the network 605 facilitates communications between the control unit 610, the one or more user devices 640 and 650, the monitoring application server 660, and the central alarm station server 670.
- The network 605 is configured to enable exchange of electronic communications between devices connected to the network 605. For example, the network 605 may be configured to enable exchange of electronic communications between the control unit 610, the one or more user devices 640 and 650, the monitoring application server 660, and the central alarm station server 670. The network 605 may include, for example, one or more of the Internet, Wide Area Networks (WANs), Local Area Networks (LANs), analog or digital wired and wireless telephone networks (e.g., a public switched telephone network (PSTN), Integrated Services Digital Network (ISDN), a cellular network, and Digital Subscriber Line (DSL)), radio, television, cable, satellite, or any other delivery or tunneling mechanism for carrying data. The network 605 may include multiple networks or subnetworks, each of which may include, for example, a wired or wireless data pathway. The network 605 may include a circuit-switched network, a packet-switched data network, or any other network able to carry electronic communications (e.g., data or voice communications). For example, the network 605 may include networks based on the Internet protocol (IP), asynchronous transfer mode (ATM), the PSTN, packet-switched networks based on IP, X.25, or Frame Relay, or other comparable technologies and may support voice using, for example, VoIP or other comparable protocols used for voice communications. The network 605 may include one or more networks that include wireless data channels and wireless voice channels. The network 605 may be a wireless network, a broadband network, or a combination of networks including a wireless network and a broadband network.
- The control unit 610 includes a controller 612 and a network module 614. The controller 612 is configured to control a control unit monitoring system (e.g., a control unit system) that includes the control unit 610. In some examples, the controller 612 may include a processor or other control circuitry configured to execute instructions of a program that controls operation of a control unit system. In these examples, the controller 612 may be configured to receive input from sensors, flow meters, or other devices included in the control unit system and control operations of devices included in the household (e.g., speakers, lights, doors, etc.). For example, the controller 612 may be configured to control operation of the network module 614 included in the control unit 610.
- The network module 614 is a communication device configured to exchange communications over the network 605. The network module 614 may be a wireless communication module configured to exchange wireless communications over the network 605. For example, the network module 614 may be a wireless communication device configured to exchange communications over a wireless data channel and a wireless voice channel. In this example, the network module 614 may transmit alarm data over a wireless data channel and establish a two-way voice communication session over a wireless voice channel. The wireless communication device may include one or more of an LTE module, a GSM module, a radio modem, a cellular transmission module, or any type of module configured to exchange communications in one of the following formats: LTE, GSM or GPRS, CDMA, EDGE or EGPRS, EV-DO or EVDO, UMTS, or IP.
- The network module 614 also may be a wired communication module configured to exchange communications over the network 605 using a wired connection. For instance, the network module 614 may be a modem, a network interface card, or another type of network interface device. The network module 614 may be an Ethernet network card configured to enable the control unit 610 to communicate over a local area network and/or the Internet. The network module 614 also may be a voiceband modem configured to enable the alarm panel to communicate over the telephone lines of Plain Old Telephone Systems (POTS).
- The control unit system that includes the control unit 610 includes one or more sensors. For example, the monitoring system may include multiple sensors 620. The sensors 620 may include a lock sensor, a contact sensor, a motion sensor, or any other type of sensor included in a control unit system. The sensors 620 also may include an environmental sensor, such as a temperature sensor, a water sensor, a rain sensor, a wind sensor, a light sensor, a smoke detector, a carbon monoxide detector, an air quality sensor, etc. The sensors 620 further may include a health monitoring sensor, such as a prescription bottle sensor that monitors taking of prescriptions, a blood pressure sensor, a blood sugar sensor, a bed mat configured to sense presence of liquid (e.g., bodily fluids) on the bed mat, etc. In some examples, the sensors 620 may include a radio-frequency identification (RFID) sensor that identifies a particular article that includes a pre-assigned RFID tag.
- The control unit 610 communicates with the module 622 and the camera 630 to perform monitoring. The module 622 is connected to one or more devices that enable home automation control. For instance, the module 622 may be connected to one or more lighting systems and may be configured to control operation of the one or more lighting systems. Also, the module 622 may be connected to one or more electronic locks at the property and may be configured to control operation of the one or more electronic locks (e.g., control Z-Wave locks using wireless communications in the Z-Wave protocol). Further, the module 622 may be connected to one or more appliances at the property and may be configured to control operation of the one or more appliances. The module 622 may include multiple modules that are each specific to the type of device being controlled in an automated manner. The module 622 may control the one or more devices based on commands received from the control unit 610. For instance, the module 622 may cause a lighting system to illuminate an area to provide a better image of the area when captured by a camera 630.
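A hedged sketch of how an automation module like the module 622 might dispatch device-type-specific commands received from the control unit 610 is shown below; the class and handler names are hypothetical, and a real module would drive protocol-specific hardware (e.g., Z-Wave) rather than in-process callbacks.

```python
from typing import Callable, Dict

class AutomationModule:
    """Minimal sketch of an automation module: it maps device-type-specific
    commands from a control unit onto per-device-type handlers. The handler
    registration interface is an assumption for illustration."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[str, str], None]] = {}

    def register(self, device_type: str, handler: Callable[[str, str], None]) -> None:
        # e.g. "light", "lock", "appliance"
        self._handlers[device_type] = handler

    def execute(self, device_type: str, device_id: str, command: str) -> None:
        handler = self._handlers.get(device_type)
        if handler is None:
            raise ValueError(f"no automation handler registered for {device_type}")
        handler(device_id, command)

# Example: illuminate an area so a camera can capture a better-lit image.
# module = AutomationModule()
# module.register("light", lambda device_id, cmd: print(f"light {device_id}: {cmd}"))
# module.execute("light", "hallway", "on")
```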
- The camera 630 may be a video/photographic camera or other type of optical sensing device configured to capture images. For instance, the camera 630 may be configured to capture images of an area within a building or within a residential facility 102 monitored by the control unit 610. The camera 630 may be configured to capture single, static images of the area and also video images of the area in which multiple images of the area are captured at a relatively high frequency (e.g., thirty images per second). The camera 630 may be controlled based on commands received from the control unit 610.
- The camera 630 may be triggered by several different types of techniques. For instance, a Passive Infra-Red (PIR) motion sensor may be built into the camera 630 and used to trigger the camera 630 to capture one or more images when motion is detected. The camera 630 also may include a microwave motion sensor built into the camera and used to trigger the camera 630 to capture one or more images when motion is detected. The camera 630 may have a "normally open" or "normally closed" digital input that can trigger capture of one or more images when external sensors (e.g., the sensors 620, PIR, door/window, etc.) detect motion or other events. In some implementations, the camera 630 receives a command to capture an image when external devices detect motion or another potential alarm event. The camera 630 may receive the command from the controller 612 or directly from one of the sensors 620.
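The trigger sources above can be combined with a simple OR, as in the sketch below; real firmware would also debounce inputs and rate-limit captures, which is omitted here.

```python
from dataclasses import dataclass

@dataclass
class TriggerInputs:
    pir_motion: bool            # built-in PIR motion sensor fired
    microwave_motion: bool      # built-in microwave motion sensor fired
    digital_input_active: bool  # "normally open"/"normally closed" input asserted by an external sensor
    command_received: bool      # capture command from the controller or a sensor

def should_capture(inputs: TriggerInputs) -> bool:
    """Return True when any of the trigger techniques described above fires."""
    return (inputs.pir_motion or inputs.microwave_motion
            or inputs.digital_input_active or inputs.command_received)
```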
- In some examples, the camera 630 triggers integrated or external illuminators (e.g., Infra-Red, Z-Wave controlled "white" lights, lights controlled by the module 622, etc.) to improve image quality when the scene is dark. An integrated or separate light sensor may be used to determine if illumination is desired and may result in increased image quality.
- The camera 630 may be programmed with any combination of time/day schedules, system "arming state", or other variables to determine whether images should be captured or not when triggers occur. The camera 630 may enter a low-power mode when not capturing images. In this case, the camera 630 may wake periodically to check for inbound messages from the controller 612. The camera 630 may be powered by internal, replaceable batteries if located remotely from the control unit 610. The camera 630 may employ a small solar cell to recharge the battery when light is available. Alternatively, the camera 630 may be powered by the controller 612's power supply if the camera 630 is co-located with the controller 612.
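A simplified sketch of the low-power duty cycle follows: sleep, wake, poll for a message from the controller, and capture on command. The callbacks, the "capture" message value, and the interval are assumptions for illustration.

```python
import time
from typing import Callable, Optional

def low_power_loop(poll_messages: Callable[[], Optional[str]],
                   capture_image: Callable[[], None],
                   wake_interval_s: float = 30.0,
                   cycles: int = 10) -> None:
    """Sketch of a camera's low-power behavior: sleep, wake periodically, check for
    inbound messages from the controller, and capture when commanded.

    `poll_messages` and `capture_image` are hypothetical callbacks; the interval
    and cycle count are arbitrary illustrative values.
    """
    for _ in range(cycles):
        time.sleep(wake_interval_s)   # stand-in for the hardware low-power state
        message = poll_messages()
        if message == "capture":
            capture_image()
```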
- In some implementations, the camera 630 communicates directly with the monitoring application server 660 over the Internet. In these implementations, image data captured by the camera 630 does not pass through the control unit 610 and the camera 630 receives commands related to operation from the monitoring application server 660.
- The system 600 also includes a thermostat 634 to perform dynamic environmental control at the property. The thermostat 634 is configured to monitor temperature and/or energy consumption of an HVAC system associated with the thermostat 634, and is further configured to provide control of environmental (e.g., temperature) settings. In some implementations, the thermostat 634 can additionally or alternatively receive data relating to activity at a property and/or environmental data at a property, e.g., at various locations indoors and outdoors at the property. The thermostat 634 can directly measure energy consumption of the HVAC system associated with the thermostat, or can estimate energy consumption of the HVAC system associated with the thermostat 634, for example, based on detected usage of one or more components of the HVAC system associated with the thermostat 634. The thermostat 634 can communicate temperature and/or energy monitoring information to or from the control unit 610 and can control the environmental (e.g., temperature) settings based on commands received from the control unit 610.
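Where direct measurement is unavailable, energy use can be estimated from detected component runtimes, as in this sketch; the component names and nominal power figures are placeholders, not values from the disclosure.

```python
from typing import Dict

# Assumed nominal power draw per HVAC component in kilowatts; both the names and
# the figures are illustrative placeholders.
NOMINAL_KW = {"compressor": 3.5, "blower_fan": 0.5, "aux_heat": 9.6}

def estimate_hvac_energy_kwh(runtime_hours: Dict[str, float]) -> float:
    """Estimate HVAC energy use from detected component runtimes, roughly as a
    thermostat or HVAC module might when it cannot measure energy directly."""
    return sum(NOMINAL_KW.get(component, 0.0) * hours
               for component, hours in runtime_hours.items())

# Example: estimate_hvac_energy_kwh({"compressor": 2.0, "blower_fan": 3.5}) -> 8.75
```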
- In some implementations, the thermostat 634 is a dynamically programmable thermostat and can be integrated with the control unit 610. For example, the dynamically programmable thermostat 634 can include the control unit 610, e.g., as an internal component to the dynamically programmable thermostat 634. In addition, the control unit 610 can be a gateway device that communicates with the dynamically programmable thermostat 634.
- A module 637 is connected to one or more components of an HVAC system associated with a property, and is configured to control operation of the one or more components of the HVAC system. In some implementations, the module 637 is also configured to monitor energy consumption of the HVAC system components, for example, by directly measuring the energy consumption of the HVAC system components or by estimating the energy usage of the one or more HVAC system components based on detecting usage of components of the HVAC system. The module 637 can communicate energy monitoring information and the state of the HVAC system components to the thermostat 634 and can control the one or more components of the HVAC system based on commands received from the thermostat 634.
- In some examples, the system 600 further includes one or more robotic devices. The robotic devices may be any type of robots that are capable of moving and taking actions that assist in security monitoring. For example, the robotic devices may include drones that are capable of moving throughout a property based on automated control technology and/or user input control provided by a user. In this example, the drones may be able to fly, roll, walk, or otherwise move about the property. The drones may include helicopter type devices (e.g., quad copters), rolling helicopter type devices (e.g., roller copter devices that can fly and also roll along the ground, walls, or ceiling), and land vehicle type devices (e.g., automated cars that drive around a property). In some cases, the robotic devices may be robotic devices that are intended for other purposes and merely associated with the system 600 for use in appropriate circumstances. For instance, a robotic vacuum cleaner device may be associated with the monitoring system 600 as one of the robotic devices and may be controlled to take action responsive to monitoring system events.
- In some examples, the robotic devices automatically navigate within a property. In these examples, the robotic devices include sensors and control processors that guide movement of the robotic devices within the property. For instance, the robotic devices may navigate within the property using one or more cameras, one or more proximity sensors, one or more gyroscopes, one or more accelerometers, one or more magnetometers, a global positioning system (GPS) unit, an altimeter, one or more sonar or laser sensors, and/or any other types of sensors that aid in navigation about a space. The robotic devices may include control processors that process output from the various sensors and control the robotic devices to move along a path that reaches the desired destination and avoids obstacles. In this regard, the control processors detect walls or other obstacles in the property and guide movement of the robotic devices in a manner that avoids the walls and other obstacles.
- In addition, the robotic devices may store data that describes attributes of the property. For instance, the robotic devices may store a floorplan and/or a three-dimensional model of the property that enables the robotic devices to navigate the property. During initial configuration, the robotic devices may receive the data describing attributes of the property, determine a frame of reference to the data (e.g., a home or reference location in the property), and navigate the property based on the frame of reference and the data describing attributes of the property. Further, initial configuration of the robotic devices also may include learning of one or more navigation patterns in which a user provides input to control the robotic devices to perform a specific navigation action (e.g., fly to an upstairs bedroom and spin around while capturing video and then return to a home charging base). In this regard, the robotic devices may learn and store the navigation patterns such that the robotic devices may automatically repeat the specific navigation actions upon a later request.
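A learned navigation pattern can be modeled as a recorded sequence of waypoints that is later replayed, roughly as sketched below; the data structure and the return-to-base step are illustrative assumptions, not the disclosed implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class NavigationPattern:
    """A learned navigation pattern: a named sequence of waypoints relative to a
    reference location (e.g., the home charging base). A simplified sketch; a real
    device would also store actions such as 'capture video' or 'spin'."""
    name: str
    waypoints: List[Tuple[float, float, float]] = field(default_factory=list)  # x, y, z in meters

    def record(self, waypoint: Tuple[float, float, float]) -> None:
        self.waypoints.append(waypoint)

    def replay(self) -> List[Tuple[float, float, float]]:
        # Return the stored route plus a final hop back to the reference location (0, 0, 0).
        return list(self.waypoints) + [(0.0, 0.0, 0.0)]
```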
- In some examples, the robotic devices may include data capture and recording devices. In these examples, the robotic devices may include one or more cameras, one or more motion sensors, one or more microphones, one or more biometric data collection tools, one or more temperature sensors, one or more humidity sensors, one or more air flow sensors, and/or any other types of sensors that may be useful in capturing monitoring data related to the property and users in the property. The one or more biometric data collection tools may be configured to collect biometric samples of a person in the home with or without contact of the person. For instance, the biometric data collection tools may include a fingerprint scanner, a hair sample collection tool, a skin cell collection tool, and/or any other tool that allows the robotic devices to take and store a biometric sample that can be used to identify the person (e.g., a biometric sample with DNA that can be used for DNA testing).
- In some implementations, the robotic devices may include output devices. In these implementations, the robotic devices may include one or more displays, one or more speakers, and/or any type of output devices that allow the robotic devices to communicate information to a nearby user.
- The robotic devices also may include a communication module that enables the robotic devices to communicate with the
control unit 610, each other, and/or other devices. The communication module may be a wireless communication module that allows the robotic devices to communicate wirelessly. For instance, the communication module may be a Wi-Fi module that enables the robotic devices to communicate over a local wireless network at the property. The communication module further may be a 900 MHz wireless communication module that enables the robotic devices to communicate directly with the control unit 610. Other types of short-range wireless communication protocols, such as Bluetooth, Bluetooth LE, Z-Wave, Zigbee, etc., may be used to allow the robotic devices to communicate with other devices in the property.
- The robotic devices further may include processor and storage capabilities. The robotic devices may include any suitable processing devices that enable the robotic devices to operate applications and perform the actions described throughout this disclosure. In addition, the robotic devices may include solid state electronic storage that enables the robotic devices to store applications, configuration data, collected sensor data, and/or any other type of information available to the robotic devices.
- The robotic devices are associated with one or more charging stations. The charging stations may be located at predefined home base or reference locations in the property. The robotic devices may be configured to navigate to the charging stations after completion of tasks needed to be performed for the
monitoring system 600. For instance, after completion of a monitoring operation or upon instruction by the control unit 610, the robotic devices may be configured to automatically fly to and land on one of the charging stations. In this regard, the robotic devices may automatically maintain a fully charged battery in a state in which the robotic devices are ready for use by the monitoring system 600.
- The charging stations may be contact based charging stations and/or wireless charging stations. For contact based charging stations, the robotic devices may have readily accessible points of contact that the robotic devices are capable of positioning and mating with a corresponding contact on the charging station. For instance, a helicopter type robotic device may have an electronic contact on a portion of its landing gear that rests on and mates with an electronic pad of a charging station when the helicopter type robotic device lands on the charging station. The electronic contact on the robotic device may include a cover that opens to expose the electronic contact when the robotic device is charging and closes to cover and insulate the electronic contact when the robotic device is in operation.
- For wireless charging stations, the robotic devices may charge through a wireless exchange of power. In these cases, the robotic devices need only locate themselves closely enough to the wireless charging stations for the wireless exchange of power to occur. In this regard, the positioning needed to land at a predefined home base or reference location in the property may be less precise than with a contact based charging station. Based on the robotic devices landing at a wireless charging station, the wireless charging station outputs a wireless signal that the robotic devices receive and convert to a power signal that charges a battery maintained on the robotic devices.
- In some implementations, each of the robotic devices has a corresponding and assigned charging station such that the number of robotic devices equals the number of charging stations. In these implementations, the robotic devices always navigate to the specific charging station assigned to that robotic device. For instance, a first robotic device may always use a first charging station and a second robotic device may always use a second charging station.
- In some examples, the robotic devices may share charging stations. For instance, the robotic devices may use one or more community charging stations that are capable of charging multiple robotic devices. The community charging station may be configured to charge multiple robotic devices in parallel. The community charging station may be configured to charge multiple robotic devices in serial such that the multiple robotic devices take turns charging and, when fully charged, return to a predefined home base or reference location in the property that is not associated with a charger. The number of community charging stations may be less than the number of robotic devices.
- Also, the charging stations may not be assigned to specific robotic devices and may be capable of charging any of the robotic devices. In this regard, the robotic devices may use any suitable, unoccupied charging station when not in use. For instance, when one of the robotic devices has completed an operation or is in need of battery charge, the
control unit 610 references a stored table of the occupancy status of each charging station and instructs the robotic device to navigate to the nearest charging station that is unoccupied.
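The occupancy-table lookup could be sketched as follows, assuming straight-line distance and a boolean occupancy flag per station; a deployed system would plan an actual flight path rather than compare Euclidean distances.

```python
import math
from typing import Dict, Optional, Tuple

def nearest_unoccupied_station(robot_xy: Tuple[float, float],
                               stations: Dict[str, Tuple[float, float]],
                               occupancy: Dict[str, bool]) -> Optional[str]:
    """Pick the closest charging station whose stored occupancy flag is False,
    mirroring the table lookup the control unit performs before dispatching a robot.

    Station names, coordinates, and the distance metric are illustrative assumptions.
    """
    free = [(name, xy) for name, xy in stations.items() if not occupancy.get(name, False)]
    if not free:
        return None  # every charging station is currently occupied
    return min(free, key=lambda item: math.dist(robot_xy, item[1]))[0]
```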
- The system 600 further includes one or more integrated security devices 680. The one or more integrated security devices may include any type of device used to provide alerts based on received sensor data. For instance, the one or more control units 610 may provide one or more alerts to the one or more integrated security input/output devices. Additionally, the one or more control units 610 may receive one or more sensor data from the sensors 620 and determine whether to provide an alert to the one or more integrated security input/output devices 680.
- The sensors 620, the module 622, the camera 630, the thermostat 634, and the integrated security devices 680 communicate with the controller 612 over communication links 624, 626, 628, 632, 684, and 686. The communication links 624, 626, 628, 632, 684, and 686 may include a wired or wireless data pathway configured to transmit signals from the sensors 620, the module 622, the camera 630, the thermostat 634, and the integrated security devices 680 to the controller 612. The sensors 620, the module 622, the camera 630, the thermostat 634, and the integrated security devices 680 may continuously transmit sensed values to the controller 612, periodically transmit sensed values to the controller 612, or transmit sensed values to the controller 612 in response to a change in a sensed value.
- The communication links 624, 626, 628, 632, 684, and 686 may include a local network. The sensors 620, the module 622, the camera 630, the thermostat 634, the integrated security devices 680, and the controller 612 may exchange data and commands over the local network. The local network may include 802.11 "Wi-Fi" wireless Ethernet (e.g., using low-power Wi-Fi chipsets), Z-Wave, Zigbee, Bluetooth, "HomePlug" or other "Powerline" networks that operate over AC wiring, and a Category 5 (CAT5) or Category 6 (CAT6) wired Ethernet network. The local network may be a mesh network constructed based on the devices connected to the mesh network.
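The report-on-change behavior described above can be sketched as a small filter that transmits a value only when it differs sufficiently from the last value sent; the threshold and the interface are illustrative assumptions.

```python
from typing import Dict, Optional

class ChangeReporter:
    """Sketch of report-on-change transmission: a sensor pushes a value to the
    controller only when it differs from the last value sent by more than a
    threshold. The threshold and sensor identifiers are assumptions."""

    def __init__(self, threshold: float = 0.5) -> None:
        self.threshold = threshold
        self._last_sent: Dict[str, float] = {}

    def maybe_report(self, sensor_id: str, value: float) -> Optional[float]:
        last = self._last_sent.get(sensor_id)
        if last is None or abs(value - last) >= self.threshold:
            self._last_sent[sensor_id] = value
            return value   # would be transmitted over the communication link
        return None        # suppressed: no meaningful change since the last report
```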
- The monitoring application server 660 is an electronic device configured to provide monitoring services by exchanging electronic communications with the control unit 610, the one or more user devices 640 and 650, and the central alarm station server 670 over the network 605. For example, the monitoring application server 660 may be configured to monitor events (e.g., alarm events) generated by the control unit 610. In this example, the monitoring application server 660 may exchange electronic communications with the network module 614 included in the control unit 610 to receive information regarding events (e.g., alerts) detected by the control unit 610. The monitoring application server 660 also may receive information regarding events (e.g., alerts) from the one or more user devices 640 and 650.
- In some examples, the monitoring application server 660 may route alert data received from the network module 614 or the one or more user devices 640 and 650 to the central alarm station server 670. For example, the monitoring application server 660 may transmit the alert data to the central alarm station server 670 over the network 605.
- The monitoring application server 660 may store sensor and image data received from the monitoring system and perform analysis of the sensor and image data. Based on the analysis, the monitoring application server 660 may communicate with and control aspects of the control unit 610 or the one or more user devices 640 and 650.
- The central alarm station server 670 is an electronic device configured to provide alarm monitoring service by exchanging communications with the control unit 610, the one or more mobile devices 640 and 650, and the monitoring application server 660 over the network 605. For example, the central alarm station server 670 may be configured to monitor alerting events generated by the control unit 610. In this example, the central alarm station server 670 may exchange communications with the network module 614 included in the control unit 610 to receive information regarding alerting events detected by the control unit 610. The central alarm station server 670 also may receive information regarding alerting events from the one or more mobile devices 640 and 650 and/or the monitoring application server 660.
- The central alarm station server 670 is connected to multiple terminals. The terminals may be used by operators to process alerting events. For example, the central alarm station server 670 may route alerting data to the terminals, and the terminals may be configured to receive the alerting data from the central alarm station server 670 and render a display of information based on the alerting data. For instance, the controller 612 may control the network module 614 to transmit, to the central alarm station server 670, alerting data indicating that a sensor 620 detected motion from a motion sensor via the sensors 620. The central alarm station server 670 may receive the alerting data and route the alerting data to the terminal 672 for processing by an operator associated with the terminal 672. The terminal 672 may render a display to the operator that includes information associated with the alerting event (e.g., the lock sensor data, the motion sensor data, the contact sensor data, etc.) and the operator may handle the alerting event based on the displayed information.
- In some implementations, the terminals may be mobile devices or devices designed for a specific function. Although FIG. 6 illustrates two terminals for brevity, actual implementations may include more (and, perhaps, many more) terminals.
- The one or more user devices 640 and 650 are devices that host and display user interfaces. For instance, the user device 640 is a mobile device that hosts one or more native applications (e.g., the smart home application 642). The user device 640 may be a cellular phone or a non-cellular locally networked device with a display. The user device 640 may include a cell phone, a smart phone, a tablet PC, a personal digital assistant ("PDA"), or any other portable device configured to communicate over a network and display information. For example, implementations may also include Blackberry-type devices (e.g., as provided by Research in Motion), electronic organizers, iPhone-type devices (e.g., as provided by Apple), iPod devices (e.g., as provided by Apple) or other portable music players, other communication devices, and handheld or portable electronic devices for gaming, communications, and/or data organization. The user device 640 may perform functions unrelated to the monitoring system, such as placing personal telephone calls, playing music, playing video, displaying pictures, browsing the Internet, maintaining an electronic calendar, etc.
- The user device 640 includes a smart home application 642. The smart home application 642 refers to a software/firmware program running on the corresponding mobile device that enables the user interface and features described throughout. The user device 640 may load or install the smart home application 642 based on data received over a network or data received from local media. The smart home application 642 runs on mobile device platforms, such as iPhone, iPod touch, Blackberry, Google Android, Windows Mobile, etc. The smart home application 642 enables the user device 640 to receive and process image and sensor data from the monitoring system.
- The user device 650 may be a general-purpose computer (e.g., a desktop personal computer, a workstation, or a laptop computer) that is configured to communicate with the monitoring application server 660 and/or the control unit 610 over the network 605. The user device 650 may be configured to display a smart home user interface 652 that is generated by the user device 650 or generated by the monitoring application server 660. For example, the user device 650 may be configured to display a user interface (e.g., a web page) provided by the monitoring application server 660 that enables a user to perceive images captured by the camera 630 and/or reports related to the monitoring system. Although FIG. 6 illustrates two user devices for brevity, actual implementations may include more (and, perhaps, many more) or fewer user devices.
- In some implementations, the one or more user devices 640 and 650 communicate with and receive monitoring system data from the control unit 610 using the communication link 638. For instance, the one or more user devices 640 and 650 may communicate with the control unit 610 using various local wireless protocols such as Wi-Fi, Bluetooth, Z-Wave, Zigbee, HomePlug (ethernet over powerline), or wired protocols such as Ethernet and USB, to connect the one or more user devices 640 and 650 to local security and automation equipment. The one or more user devices 640 and 650 may connect locally to the monitoring system and its sensors and other devices. The local connection may improve the speed of status and control communications because communicating through the network 605 with a remote server (e.g., the monitoring application server 660) may be significantly slower.
- Although the one or more user devices 640 and 650 are shown as communicating with the control unit 610, the one or more user devices 640 and 650 may communicate directly with the sensors and other devices controlled by the control unit 610. In some implementations, the one or more user devices 640 and 650 replace the control unit 610 and perform the functions of the control unit 610 for local monitoring and long range/offsite communication.
- In other implementations, the one or more user devices 640 and 650 receive monitoring system data captured by the control unit 610 through the network 605. The one or more user devices 640 and 650 may receive the data from the control unit 610 through the network 605, or the monitoring application server 660 may relay data received from the control unit 610 to the one or more user devices 640 and 650 through the network 605. In this regard, the monitoring application server 660 may facilitate communication between the one or more user devices 640 and 650 and the monitoring system.
- In some implementations, the one or more user devices 640 and 650 may be configured to switch whether the one or more user devices 640 and 650 communicate with the control unit 610 directly (e.g., through link 638) or through the monitoring application server 660 (e.g., through network 605) based on a location of the one or more user devices 640 and 650. For instance, when the one or more user devices 640 and 650 are located close to the control unit 610 and in range to communicate directly with the control unit 610, the one or more user devices 640 and 650 use direct communication. When the one or more user devices 640 and 650 are located far from the control unit 610 and not in range to communicate directly with the control unit 610, the one or more user devices 640 and 650 use communication through the monitoring application server 660.
- Although the one or more user devices 640 and 650 are shown as being connected to the network 605, in some implementations, the one or more user devices 640 and 650 are not connected to the network 605. In these implementations, the one or more user devices 640 and 650 communicate directly with one or more of the monitoring system components and no network (e.g., Internet) connection or reliance on remote servers is needed.
- In some implementations, the one or more user devices 640 and 650 are used in conjunction with only local sensors and/or local devices in a house. In these implementations, the system 600 only includes the one or more user devices 640 and 650, the sensors 620, the module 622, the camera 630, and the robotic devices. The one or more user devices 640 and 650 receive data directly from the sensors 620, the module 622, the camera 630, and the robotic devices, and send data directly to the sensors 620, the module 622, the camera 630, and the robotic devices. The one or more user devices 640 and 650 provide the appropriate interfaces/processing to provide visual surveillance and reporting.
- In other implementations, the system 600 further includes network 605, and the sensors 620, the module 622, the camera 630, the thermostat 634, and the robotic devices are configured to communicate sensor and image data to the one or more user devices 640 and 650 over network 605 (e.g., the Internet, cellular network, etc.). In yet another implementation, the sensors 620, the module 622, the camera 630, the thermostat 634, and the robotic devices (or a component, such as a bridge/router) are intelligent enough to change the communication pathway from a direct local pathway when the one or more user devices 640 and 650 are in close physical proximity to the sensors 620, the module 622, the camera 630, the thermostat 634, and the robotic devices to a pathway over network 605 when the one or more user devices 640 and 650 are farther from the sensors 620, the module 622, the camera 630, the thermostat 634, and the robotic devices. In some examples, the system leverages GPS information from the one or more user devices 640 and 650 to determine whether the one or more user devices 640 and 650 are close enough to the sensors 620, the module 622, the camera 630, the thermostat 634, and the robotic devices to use the direct local pathway or whether the one or more user devices 640 and 650 are far enough from the sensors 620, the module 622, the camera 630, the thermostat 634, and the robotic devices that the pathway over network 605 is required. In other examples, the system leverages status communications (e.g., pinging) between the one or more user devices 640 and 650 and the sensors 620, the module 622, the camera 630, the thermostat 634, and the robotic devices to determine whether communication using the direct local pathway is possible. If communication using the direct local pathway is possible, the one or more user devices 640 and 650 communicate with the sensors 620, the module 622, the camera 630, the thermostat 634, and the robotic devices using the direct local pathway. If communication using the direct local pathway is not possible, the one or more user devices 640 and 650 communicate with the sensors 620, the module 622, the camera 630, the thermostat 634, and the robotic devices using the pathway over network 605.
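The pathway selection logic can be sketched by combining the two signals described above, a proximity estimate and a local status ping; the range threshold and the callback interface are assumptions for illustration.

```python
from typing import Callable

def choose_pathway(ping_local: Callable[[], bool],
                   distance_m: float,
                   local_range_m: float = 30.0) -> str:
    """Choose between the direct local pathway and the pathway over the wide-area network.

    `ping_local` is a hypothetical status-ping callback and the 30 m range is an
    arbitrary illustrative value; a real system would tune both.
    """
    if distance_m <= local_range_m and ping_local():
        return "direct-local"
    return "network-605"
```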
- In some implementations, the system 600 provides end users with access to images captured by the camera 630 to aid in decision making. The system 600 may transmit the images captured by the camera 630 over a wireless WAN network to the user devices 640 and 650. Because transmission over a wireless WAN network may be relatively expensive, the system 600 uses several techniques to reduce costs while providing access to significant levels of useful visual information.
- In some implementations, a state of the monitoring system and other events sensed by the monitoring system may be used to enable/disable video/image recording devices (e.g., the camera 630). In these implementations, the camera 630 may be set to capture images on a periodic basis when the alarm system is armed in an "Away" state, but set not to capture images when the alarm system is armed in a "Stay" state or disarmed. In addition, the camera 630 may be triggered to begin capturing images when the alarm system detects an event, such as an alarm event, a door-opening event for a door that leads to an area within a field of view of the camera 630, or motion in the area within the field of view of the camera 630. In other implementations, the camera 630 may capture images continuously, but the captured images may be stored or transmitted over a network when needed.
- The described systems, methods, and techniques may be implemented in digital electronic circuitry, computer hardware, firmware, software, or in combinations of these elements. Apparatus implementing these techniques may include appropriate input and output devices, a computer processor, and a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor. A process implementing these techniques may be performed by a programmable processor executing a program of instructions to perform desired functions by operating on input data and generating appropriate output. The techniques may be implemented in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. Each computer program may be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language may be a compiled or interpreted language. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, a processor will receive instructions and data from a read-only memory and/or a random access memory. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and Compact Disc Read-Only Memory (CD-ROM). Any of the foregoing may be supplemented by, or incorporated in, specially designed ASICs (application-specific integrated circuits).
- It will be understood that various modifications may be made. For example, other useful implementations could be achieved if steps of the disclosed techniques were performed in a different order and/or if components in the disclosed systems were combined in a different manner and/or replaced or supplemented by other components. Accordingly, other implementations are within the scope of the disclosure.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/195,005 US11717231B2 (en) | 2017-11-29 | 2021-03-08 | Ultrasound analytics for actionable information |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762591920P | 2017-11-29 | 2017-11-29 | |
US16/205,037 US10943153B2 (en) | 2017-11-29 | 2018-11-29 | Ultrasound analytics for actionable information |
US17/195,005 US11717231B2 (en) | 2017-11-29 | 2021-03-08 | Ultrasound analytics for actionable information |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/205,037 Continuation US10943153B2 (en) | 2017-11-29 | 2018-11-29 | Ultrasound analytics for actionable information |
Publications (2)
Publication Number | Publication Date |
---|---|
US20210287056A1 true US20210287056A1 (en) | 2021-09-16 |
US11717231B2 US11717231B2 (en) | 2023-08-08 |
Family
ID=64901667
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/205,037 Active 2039-01-13 US10943153B2 (en) | 2017-11-29 | 2018-11-29 | Ultrasound analytics for actionable information |
US17/195,005 Active 2039-06-15 US11717231B2 (en) | 2017-11-29 | 2021-03-08 | Ultrasound analytics for actionable information |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/205,037 Active 2039-01-13 US10943153B2 (en) | 2017-11-29 | 2018-11-29 | Ultrasound analytics for actionable information |
Country Status (5)
Country | Link |
---|---|
US (2) | US10943153B2 (en) |
EP (1) | EP3717980B1 (en) |
AU (1) | AU2018375154B2 (en) |
CA (1) | CA3083271A1 (en) |
WO (1) | WO2019108846A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11049404B2 (en) * | 2019-02-06 | 2021-06-29 | Motorola Mobility Llc | Methods and systems for unmanned aircraft monitoring in response to Internet-of-things initiated investigation requests |
US20210019545A1 (en) * | 2019-07-16 | 2021-01-21 | Honeywell International Inc. | Method and system to display object locations during a search and rescue operation |
FR3100075B1 (en) | 2019-08-23 | 2021-11-05 | Atos Integration | PERSON DETECTION DEVICE BY DRONE |
EP3836584B1 (en) * | 2019-12-13 | 2023-01-11 | Sony Group Corporation | Rescue support in large-scale emergency situations |
US11410420B1 (en) * | 2020-07-28 | 2022-08-09 | Wells Fargo Bank, N.A. | Enhancing branch opening and closing procedures using autonomous drone security and monitoring |
US20230047041A1 (en) * | 2021-08-10 | 2023-02-16 | International Business Machines Corporation | User safety and support in search and rescue missions |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170092109A1 (en) * | 2015-09-30 | 2017-03-30 | Alarm.Com Incorporated | Drone-augmented emergency response services |
US20180251122A1 (en) * | 2017-03-01 | 2018-09-06 | Qualcomm Incorporated | Systems and methods for operating a vehicle based on sensor data |
US20190095687A1 (en) * | 2017-09-28 | 2019-03-28 | At&T Intellectual Property I, L.P. | Drone data locker system |
US11423489B1 (en) * | 2015-06-17 | 2022-08-23 | State Farm Mutual Automobile Insurance Company | Collection of crash data using autonomous or semi-autonomous drones |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
IL138695A (en) * | 2000-09-26 | 2004-08-31 | Rafael Armament Dev Authority | Unmanned mobile device |
US20090048500A1 (en) * | 2005-04-20 | 2009-02-19 | Respimetrix, Inc. | Method for using a non-invasive cardiac and respiratory monitoring system |
US20140278478A1 (en) | 2013-03-13 | 2014-09-18 | Guardsman Scientific, Inc. | Systems and methods for analytics-based patient management |
US9456800B2 (en) | 2009-12-18 | 2016-10-04 | Massachusetts Institute Of Technology | Ultrasound scanning system |
US8983682B1 (en) * | 2012-12-28 | 2015-03-17 | Google Inc. | Unlocking mobile-device and/or unmanned aerial vehicle capability in an emergency situation |
US20170119347A1 (en) | 2015-06-19 | 2017-05-04 | Neural Analytics, Inc. | Robotic systems for control of an ultrasonic probe |
US10579863B2 (en) * | 2015-12-16 | 2020-03-03 | Global Tel*Link Corporation | Unmanned aerial vehicle with biometric verification |
US9987971B2 (en) * | 2016-07-29 | 2018-06-05 | International Business Machines Corporation | Drone-enhanced vehicle external lights |
CA3044602A1 (en) * | 2016-11-23 | 2018-05-31 | Alarm.Com Incorporated | Detection of authorized user presence and handling of unauthenticated monitoring system commands |
US10332407B2 (en) * | 2017-07-07 | 2019-06-25 | Walmart Apollo, Llc | Systems and methods for providing emergency alerts at emergency landing areas of unmanned aerial vehicles |
Also Published As
Publication number | Publication date |
---|---|
CA3083271A1 (en) | 2019-06-06 |
AU2018375154B2 (en) | 2023-11-23 |
US11717231B2 (en) | 2023-08-08 |
EP3717980A1 (en) | 2020-10-07 |
US20190164019A1 (en) | 2019-05-30 |
US10943153B2 (en) | 2021-03-09 |
WO2019108846A1 (en) | 2019-06-06 |
EP3717980B1 (en) | 2024-05-08 |
AU2018375154A1 (en) | 2020-06-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| AS | Assignment | Owner name: ALARM.COM INCORPORATED, VIRGINIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DJIOFACK, INNOCENT;REEL/FRAME:055530/0206. Effective date: 20181229 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |