US20210097624A1 - Method and apparatus for increasing the density of data surrounding an event - Google Patents
Method and apparatus for increasing the density of data surrounding an event
- Publication number: US20210097624A1 (application Ser. No. 17/071,722)
- Authority: US (United States)
- Prior art keywords
- state
- sensor
- event
- insurable event
- attribute
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/08—Insurance
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B15/00—Systems controlled by a computer
- G05B15/02—Systems controlled by a computer electric
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B21/00—Systems involving sampling of the variable controlled
- G05B21/02—Systems involving sampling of the variable controlled electric
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/26—Pc applications
- G05B2219/2642—Domotique, domestic, home control, automation, smart house
Definitions
- This description relates to operation of sensor networks such as those used for security, intrusion, and alarm systems installed on industrial, commercial, or residential premises.
- The techniques automatically evaluate such data by analyzing it in conjunction with additional data feeds from other sensors and other external data sources for various purposes.
- One such purpose is for a tool to automatically initiate an insurance claim that can be validated and paid.
- Described herein is a system that mines accumulated data and geographically-related data from systems deployed in a premises, produces predictions of the risk level that equipment, or a user's actions relative to the equipment, poses to the premises and/or the equipment, and sends appropriate control signals to sensors for enhanced monitoring of the premises.
- A computer program product, tangibly stored on a computer readable hardware storage device, for controlling operation of sensors at a physical premises includes instructions to cause a processor to: receive a message corresponding to a prediction of an impending insurable event at the physical premises; process the received message according to an algorithm that is selected in accordance with the predicted insurable event, the algorithm producing one or more commands to modify operation of one or more specific sensors of a plurality of sensor devices that collect sensor information at the physical premises; send the commands that modify the operation of the one or more sensor devices at the physical premises at a period of time prior to a likely occurrence of the predicted insurable event; collect sensor information from the plurality of sensor devices deployed at the premises; and store the sensor information in a remote persistent storage system.
- aspects also include systems and methods.
- the message includes a calculated indication produced by instructions to continually analyze the collected sensor information by one or more unsupervised learning models to determine normal sets of states and drift states for the premises to produce the prediction of an occurrence of the insurable event.
- the algorithm to process the received message includes instructions to determine modifications of the operation of the one or more specific sensor devices at the identified premises according to an occurrence of a drift state and produce the messages including the commands that modify the operation of the one or more specific sensor devices from the determined modifications that are based on the drift state.
- the instructions to determine modifications further comprise instructions to analyze the prediction of the event, determine sensor devices that are in proximity to a location of the predicted event, determine modifications to sensor devices in proximity to the location, which modifications are based on the predicted event, specific locations of the sensor devices and specific types of the sensor devices, determine the commands based on the determined modifications and send the commands to the one or more specific sensor devices.
- the message is received from an external service.
- the computer program product also includes instructions to generate an insurance claim form by automatically populating a template insurance claim form with information required by the template insurance claim form.
- the computer program product further includes instructions to detect an actual occurrence of the insurable event based on actual sensor data received from the plurality of sensor devices to provide a trigger to generate an insurance claim form and generate an insurance claim form subsequent to the actual occurrence of the event by automatically populating a template insurance claim form with information required by the template insurance claim form.
- the computer program product further includes instructions to retrieve from a database, operational data for specified equipment that are insured by the insurance carrier, the operational data comprising service records, raw sensor data, and/or alerts generated for the specified equipment and augment the insurance claim form with a report that includes the operational data for the specified equipment at a time period prior to the event.
- In implementations in which the algorithm is for a weather-related event, the computer program product further includes instructions to receive the indication from an external service, the indication being of the weather-related event, parse the received indication to produce a representation of the indication that identifies a type of weather-related event, analyze the parsed indication according to the location of the physical premises to produce a likely prediction of damage to the physical premises, and produce the commands to modify the operation of the one or more specific sensors of the plurality of sensors, according to the likely prediction of damage to the physical premises.
- the computer program product further includes instructions to produce a request to upload service and usage data for one or more monitored units within the premises to an external database and send to the system at the physical premises the request to upload to the external database, the service and usage data.
- the one or more specific sensors include one or more video cameras, and the algorithm comprises instructions to receive current positioning information for each of the one or more video cameras, calculate based at least in part on the received indication repositioning information for the one or more video cameras, and send the repositioning information to at least some of the one or more video cameras to modify operation of the one or more video cameras by repositioning the at least some of the one or more video cameras.
- the one or more specific sensors include one or more video cameras, and the algorithm comprises instructions to receive current frame rate information for each of the one or more video cameras, frame rate information being information of the frequency at which images are taken and sent by the one or more video cameras, calculate based at least in part on the received indication modified frame rate information for the one or more video cameras, and send the modified frame rate information to at least some of the one or more video cameras to modify the frame rate operation of the one or more video cameras.
- FIG. 1 is a schematic diagram of an exemplary networked security system.
- FIG. 2 is a block diagram of a sensor.
- FIG. 3 is a block diagram of a sensor based state prediction system.
- FIG. 3A is a diagram of a logical view of the sensor based state prediction system of FIG. 3 .
- FIG. 4 is a flow diagram of a state representation engine.
- FIG. 5 is a flow diagram of sensor based state prediction system processing.
- FIG. 5A is a flow diagram of training process for a Next state predictor engine that is part of the sensor based state prediction system.
- FIG. 5B is a flow diagram of a Next state predictor engine model building process.
- FIG. 6 is a flow diagram of operation processing by the sensor based state prediction system.
- FIG. 7 is a flow diagram of an example of sensor based risk profiling.
- FIG. 8 is a block diagram of a system architecture.
- FIG. 9 is a flow diagram of an overview of indication based sensor control.
- FIG. 10 is a flow diagram of an example of indication based sensor control.
- FIG. 11 is a flow diagram of an example of sensor based augmented claim filing.
- FIG. 11A is a block diagram of an exemplary format of supplemental data to augment a claim form.
- detectors/sensors 28 include motion detectors, glass break detectors, noxious gas sensors, smoke/fire detectors, contact/proximity switches, video sensors, such as camera, audio sensors such as microphones, directional microphones, temperature sensors such as infrared sensors, vibration sensors, air movement/pressure sensors, chemical/electro-chemical sensors, e.g., VOC (volatile organic compound) detectors.
- those systems' sensors may include weight sensors, LIDAR (technology that measures distance by illuminating a target with a laser and analyzing the reflected light), GPS (global positioning system) receivers, optical sensors, biometric sensors, e.g., retina scan sensors, EKG/heartbeat sensors in wearable computing garments, network hotspots and other network devices, and others.
- the surveillance/intrusion/fire/access systems employ wireless sensor networks and wireless devices, with remote, cloud-based server monitoring and report generation.
- the wireless sensor networks provide wireless links between sensors and servers, with the wireless links usually used for the lowest level connections (e.g., sensor node device to hub/gateway).
- the edge (wirelessly-connected) tier of the network is comprised of sensor devices that provide specific sensor functions. These sensor devices have a processor and memory, and may be battery operated and include a wireless network card.
- the edge devices generally form a single wireless network in which each end-node communicates directly with its parent node in a hub-and-spoke-style architecture.
- the parent node may be, e.g., a network access point (not to be confused with an access control device or system) on a gateway or a sub-coordinator which is, in turn, connected to the access point or another sub-coordinator.
- the wireless sensor network 10 is a distributed network that is logically divided into a set of tiers or hierarchical levels 12 a - 12 c .
- In an upper tier or hierarchical level 12 a of the network are disposed servers and/or virtual servers 14 running a “cloud computing” paradigm that are networked together using well-established networking technology such as Internet protocols or which can be private networks that use none or part of the Internet.
- Applications that run on those servers 14 communicate using various protocols such as, for Web Internet networks, XML/SOAP, RESTful web services, and other application layer technologies such as HTTP and ATOM.
- the distributed network 10 has direct links between devices (nodes) as shown and discussed below.
- hierarchical level 12 a includes a central monitoring station 49 comprised of one or more of the server computers 14 and which includes or receives information from a sensor based state prediction system 50 as will be described below.
- the distributed network 10 includes a second logically divided tier or hierarchical level 12 b , referred to here as a middle tier that involves gateways 16 located at central, convenient places inside individual buildings and structures. These gateways 16 communicate with servers 14 in the upper tier whether the servers are stand-alone dedicated servers and/or cloud based servers running cloud applications using web programming techniques.
- the middle tier gateways 16 are also shown with both local area network 17 a (e.g., Ethernet or 802.11) and cellular network interfaces 17 b.
- the distributed network topology also includes a lower tier (edge layer) 12 c set of devices that involve fully-functional sensor nodes 18 (e.g., sensor nodes that include wireless devices, e.g., transceivers or at least transmitters, which in FIG. 1 are marked in with an “F”), as well as wireless sensor nodes or sensor end-nodes 20 (marked in the FIG. 1 with “C”).
- wired sensors can be included in aspects of the distributed network 10 .
- the edge (wirelessly-connected) tier of the network is largely comprised of devices with specific functions. These devices have a small-to-moderate amount of processing power and memory, and often are battery powered, thus requiring that they conserve energy by spending much of their time in sleep mode.
- a typical model is one where the edge devices generally form a single wireless network in which each end-node communicates directly with its parent node in a hub-and-spoke-style architecture.
- the parent node may be, e.g., an access point on a gateway or a sub-coordinator which is, in turn, connected to the access point or another sub-coordinator.
- Each gateway is equipped with an access point (fully functional sensor node or “F” sensor node) that is physically attached to the gateway and that provides a wireless connection point to other nodes in the wireless network.
- the links (illustrated by lines not numbered) shown in FIG. 1 represent direct (single-hop MAC layer) connections between devices.
- a formal networking layer (that functions in each of the three tiers shown in FIG. 1 ) uses a series of these direct links together with routing devices to send messages (fragmented or non-fragmented) from one device to another over the network.
- the sensors 20 are sensor packs (not shown) that are configured for particular types of business applications, whereas in other implementations the sensors are found in installed systems such as the example security systems discussed below.
- Sensor device 20 includes a processor device 21 a , e.g., a CPU and/or other type of controller device, that executes under an operating system, generally with 8-bit or 16-bit logic rather than the 32- and 64-bit logic used by high-end computers and microprocessors.
- the device 20 has a relatively small flash/persistent store 21 b and volatile memory 21 c in comparison with the other computing devices on the network.
- the persistent store 21 b is about a megabyte of storage or less and volatile memory 21 c is about several kilobytes of RAM memory or less.
- the device 20 has a network interface card 21 d that interfaces the device 20 to the network 10 .
- the device 20 also includes a sensor element 22 and a sensor interface 22 a that interfaces to the processor 21 a .
- Sensor element 22 can be any of the sensor types mentioned above.
- Panel 38 may be part of an intrusion detection system (not shown).
- the panel 38 , i.e., intrusion detection panel, is coupled to plural sensors/detectors 20 ( FIG. 1 ) dispersed throughout the physical premises.
- the intrusion detection system is typically in communication with a central monitoring station (also referred to as central monitoring center not shown) via one or more data or communication networks (not shown).
- Sensor/detectors may be hard wired or communicate with the panel 38 wirelessly. In general, detectors sense glass breakage, motion, gas leaks, fire, and/or breach of an entry point, and send the sensed information to the panel 38 .
- the panel 38 , e.g., intrusion detection panel, determines whether to trigger alarms and/or send alarm messages to the central monitoring station 49 .
- a user may access the intrusion detection panel to control the intrusion detection system, e.g., disarm, arm, enter predetermined settings, etc.
- Other systems can also be deployed such as access control systems, etc.
- the prediction system 50 executes on one or more of the cloud-based server computers and accesses database(s) 51 that store sensor data and store state data in a state transition matrix.
- dedicated server computers could be used as an alternative.
- the sensor based state prediction system 50 includes a State Representation Engine 52 .
- the State Representation Engine 52 executes on one or more of the servers described above, and interfaces on the servers receive sensor signals from a large plurality of sensors deployed in various premises throughout an area. These sensor signals have sensor values and together with other monitoring data represent a data instance for a particular area of a particular premises at a single point in time. The data represent granular information collected continuously from the particular premises. The State Representation Engine takes these granular values and converts the values into a semantic representation.
- a set of sensor values and monitoring data for particular time duration are assigned a label, e.g., “State-1.”
- this Engine 52 works in an unsupervised manner, as discussed below, to determine various states that may exist in the premises.
- this Engine 52 determines state transition metrics that are stored in the form of a state transition matrix.
- a simple state transition matrix has all the states in its rows and columns, with each cell entry being the number of times the premises moved from the state in row i to the state in column j over a period of time and/or events. This matrix captures the operating behavior of the system. State transitions can happen either over time or due to events. Hence, the state transition metrics are captured using both time and events.
- a state is a representation of a group of sensors grouped according to a clustering algorithm.
- the State transition matrix is a data structure that stores how many times the environment changed from State_i to State_j.
- the State transition matrix thus stores “knowledge” that the sensor based state prediction system 50 captures and which is used to determine predictions of the behavior of the premises.
- the State transition matrix is accessed by the Next prediction engine to make decisions and trigger actions by the sensor based state prediction system 50 .
- Unsupervised learning, e.g., clustering, is used to group sensor readings into states and conditions over a period of time that form a time trigger state, and over events to form an event trigger state; these are used to populate the state transition matrix per premises.
- State transitions are expressed as a listing by instance with pointers to the state time and event trigger tables.
- Entries x,y in cells of the State transition matrix are pointers that correspond to the trigger tables that store the number of time periods and events, respectively, for each particular cell of the State transition matrix.
- the State time trigger tracks the time periods t1 . . . t8 for each state transition corresponding to the number x in each particular cell.
- the State event trigger tracks the events E1 . . . E2 for each state transition corresponding to the number y in each particular cell (if any).
- the State Representation Engine 52 , in addition to populating the State transition matrix, also populates a State time trigger that is a data structure to store the time value spent in each state and a distribution of the time duration for each state. Similar to the State transition matrix, the State time trigger also encapsulates the behavior knowledge of the environment. State transitions can be triggered using these values.
- the State Representation Engine 52 also populates a State event trigger.
- the State event trigger is a data structure to store event information.
- An example of an event can be a sensor on a door sensing that the door was opened. There are many other types of events. This data structure captures how many times such captured events caused a state transition.
- the State Representation Engine 52 populates the State Transition matrix and the State Time and State Event triggers, which together capture metrics that provide a Knowledge Layer of the operational characteristics of the premises.
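- To make these structures concrete, the following minimal Python sketch (illustrative only; names such as `record_transition` are assumptions, not from the patent) shows one way the State transition matrix (STM), State time trigger (STT) and State event trigger (SET) could be held in memory, with each STM cell counting transitions and the trigger tables recording the time periods and events behind those counts.

```python
from collections import defaultdict

class KnowledgeLayer:
    """Sketch of the STM plus its time and event trigger tables."""

    def __init__(self):
        self.stm = defaultdict(int)    # (State_i, State_j) -> transition count
        self.stt = defaultdict(list)   # (State_i, State_j) -> time durations
        self.set_ = defaultdict(list)  # (State_i, State_j) -> causal events

    def record_transition(self, s_from, s_to, duration=None, event=None):
        """Count a State_i -> State_j transition and log its trigger."""
        self.stm[(s_from, s_to)] += 1
        if duration is not None:       # time-triggered transition
            self.stt[(s_from, s_to)].append(duration)
        if event is not None:          # event-triggered transition
            self.set_[(s_from, s_to)].append(event)

kl = KnowledgeLayer()
kl.record_transition("State 1", "State 2", duration=30.0)        # time trigger
kl.record_transition("State 2", "State 1", event="door_opened")  # event trigger
```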
- the sensor based state prediction system 50 also includes a Next State Prediction Engine 54 .
- the Next State Prediction Engine 54 predicts an immediate next state of the premises based on the state transition matrix.
- the Next State Prediction Engine 54 predicts if the premises will be in either a safe state or a drift state over a time period in the future.
- the term “future” as used herein refers to a defined window of time ahead, defined so that a response team has sufficient time to address a condition predicted by the Next State Prediction Engine 54 that may occur in the premises, and to restore the state of the premises to a normal state.
- the Next State Prediction Engine operates as a Decision Layer in the sensor based state prediction system 50 .
- the sensor based state prediction system 50 also includes a State Representation graphical user interface generator 56 .
- State Representation graphical user interface generator 56 provides a graphical user interface that is used by the response team to continuously monitor the state of the premises.
- the State Representation graphical user interface generator 56 receives data from the Next State Prediction Engine 54 to graphically display whether the premises is either in the safe state or the drifting state.
- the State Representation graphical user interface generator 56 operates as an Action Layer, where an action is performed based on input from Knowledge and Decision Layers.
- the sensor based state prediction system 50 applies unsupervised learning models to analyze historical and current sensor data records from one or more customer premises and generates a model that can predict future patterns, anomalies, conditions and events over a time frame that can be expected for a customer site.
- the sensor based state prediction system 50 produces a list of one or more predictions that may result in one or more alerts being sent to one or more user devices as well as other computing systems, as will be described.
- the prediction system 50 uses various types of unsupervised machine learning models including Linear/Non-Linear Models, Ensemble methods etc.
- Referring to FIG. 3A , a logical view 50 ′ of the sensor based state prediction system 50 is shown.
- the middle layer is an abstraction layer that abstracts raw events as states, represented in FIG. 3A by the blocks “States” (State Representation Engine 52 ), STM (State Transition Matrix), STT (State Time Trigger) and SET (State Event Trigger), and produces a state as a concise semantic representation of the underlying behavior information of the environment described by time and various sensor values at that point in time.
- the State Representation Engine 52 collects 62 (e.g., from the databases 51 or directly from interfaces on the servers) received sensor signals from a large plurality of sensors deployed in various premises throughout an area that is being monitored by the sensor based state prediction system 50 .
- the sensor data collected from the premises includes collected sensor values and monitoring data values.
- the State Representation Engine 52 converts 64 this sensor data into semantic representations of the state of the premises at instances in time.
- the State Representation Engine 52 uses 66 the converted sensor semantic representation of the sensor data collected from the premises to determine the empirical characteristics of the premises.
- the State Representation Engine 52 assigns 67 an identifier to the state.
- the kitchen in a restaurant example for a premises identified in the system as “Site no.: 448192” uses the sensor values to produce a first state that is identified here as “State 1.” Any labelling can be used; states are typically consecutively identified, and this state is semantically described as follows:
- the semantic description includes the identifier “State 1” as well as semantic descriptions of the various sensors, their values and dates and times.
- the State Representation Engine 52 determines an abstraction of a collection of “events,” i.e., the sensor signals, as a state.
- the state thus is a concise representation of the underlying behavior information of the premises being monitored, described by time and date and various sensor values at that point in time and at that date.
- the semantic representation of the state is stored 68 by the State Representation Engine 52 as state transition metrics in the state transition matrix. Over time and days, as the sensors produce different sensor values, the State Representation Engine 52 determines different states and converts these states into semantic representations that are stored as state transition metrics in the matrix, e.g., in a continuous loop 70 .
- the kitchen example is further set out below:
- the State Representation Engine 52 collects the following data (fictitious data) from these three sensors at particular points in time,
- the state representation engine 52 converts these raw values into state definitions and assigns (labels) each with a unique identifier for each state, as discussed above. As the premises is operated over a period of time, the state transition matrix, the state time trigger matrix and the state event trigger matrix are filled.
- the state representation engine 52 produces the following two states (State 1 is repeated here for clarity in explanation).
- the state representation engine 52 adds to the state transition matrix an entry that corresponds to this transition, that the premises moved from state 1 to state 2.
- the state representation engine 52 also adds to the state transition matrix in that entry, an indicator that the transition was “time trigger,” causing the movement, and thus the state representation engine 52 adds an entry in state time trigger matrix.
- the state representation engine 52 thus co-ordinates various activities inside the premises under monitoring and captures/determines various operating characteristics of the premises.
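- As a hedged illustration of this conversion from raw readings to labeled states, the sketch below clusters fictitious kitchen snapshots (temperature, humidity, motion) with k-means and assigns consecutive state identifiers; the feature set, cluster count, and library choice are assumptions rather than details from the patent.

```python
import numpy as np
from sklearn.cluster import KMeans

# Fictitious kitchen snapshots: [temperature degF, humidity %, motion 0/1]
readings = np.array([
    [69.5, 38.2, 1.0],
    [70.1, 37.9, 1.0],
    [118.4, 22.0, 1.0],   # stove active
    [117.9, 21.5, 0.0],
])

# Unsupervised grouping of raw sensor values into states
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(readings)

# Consecutive semantic labels, e.g., "State 1", "State 2"
labels = [f"State {c + 1}" for c in kmeans.labels_]
print(labels)  # e.g., ['State 1', 'State 1', 'State 2', 'State 2']
```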
- Referring now to FIG. 5 , processing 80 for the Next State Prediction Engine 54 is shown.
- This processing 80 includes training processing 80 a ( FIG. 5A ) and model building processing 80 b ( FIG. 5B ), which are used in operation of the sensor based state prediction system 50 .
- training processing 80 ′ trains the Next State Prediction Engine 54 .
- the Next State Prediction Engine 54 accesses 82 the state transition matrix and retrieves a set of states from the state transition matrix. From the retrieved set of states the Next State Prediction Engine 54 generates 84 a list of most probable state transitions for a given time period; the time period can be measured in minutes, hours, days, weeks, months, etc. For example, consider the time period as a day. After a certain time period of active usage, the sensor based state prediction system 50 , through the state representation engine 52 , has acquired knowledge states s1 to s5.
- the Markov property, as used in probability and statistics, refers to the “memoryless” property of a stochastic process, under which the probability of the next state depends only on the current state.
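- Under that memoryless assumption, the most probable next state can be read directly off the state transition matrix by normalizing the counts out of the current state, as in this sketch (the dictionary-of-counts representation is an assumption carried over from the earlier sketch):

```python
def most_probable_next(stm, current_state):
    """Row-normalize transition counts out of current_state and return
    the (next_state, probability) pair with the highest probability."""
    row = {j: n for (i, j), n in stm.items() if i == current_state}
    total = sum(row.values())
    if total == 0:
        return None, 0.0
    nxt = max(row, key=row.get)
    return nxt, row[nxt] / total

stm = {("s1", "s2"): 7, ("s1", "s3"): 3, ("s2", "s1"): 5}
print(most_probable_next(stm, "s1"))  # ('s2', 0.7)
```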
- the Next State Prediction Engine 54 determines 86 if a current sequence is different than an observed sequence in the list above. When there is a difference, the Next State Prediction Engine 54 determines 88 whether something unusual has happened in the premises being monitored or whether the state sequence is a normal condition of the premises being monitored.
- the Next State Prediction Engine 54 labels 90 these state transitions as “safe” or “drift state” transitions. Either the Next State Prediction Engine 54 or manual intervention is used to apply labels, either at the state transition level or at the underlying sensor value levels (fictitious), producing the following:
- “G” is used to indicate green, e.g., a normal operating state, i.e., “a safe state”; “Y” is used to indicate yellow, e.g., an abnormal or drift state, i.e., an “unsafe state”; and “R” (not shown above) would be used to represent red, a known unsafe state.
- These data and states can be stored in the database 51 and serve as training data for a machine learning model that is part of the Next State Prediction Engine 54 .
- the model building processing 80 b uses the above training data to build a model that classifies a system's state into either a safe state or an unsafe state.
- Other states can be classified.
- three states can be defined, as above, “G Y R states,” or green (safe state), yellow (drifting state), and red (unsafe state).
- the model building processing 80 b accesses 102 the training data and applies 104 one or more machine learning algorithms to the training data to produce the model that will execute in the Next State Prediction Engine 54 during monitoring of systems.
- Machine learning algorithms such as Linear models and Non-Linear Models, Decision tree learning, etc., which are supplemented with Ensemble methods (where the votes of two or more models are tabulated to form a prediction) and so forth can be used. From this training data and the algorithms, the model is constructed 106 .
- A fictitious Decision Tree can be constructed using the above fictitious data (again where “G” is used to indicate green, “a safe state,” e.g., a normal operating state; “Y” is used to indicate yellow, e.g., a drifting state; and “R” to represent red, a known unsafe state).
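- One plausible realization of this model building step, sketched below, trains a decision tree on fictitious per-state feature vectors labeled G (safe) or Y (drift); the feature layout and scikit-learn usage are illustrative assumptions rather than the patent's actual pipeline.

```python
from sklearn.tree import DecisionTreeClassifier

# Fictitious per-state feature vectors and their G/Y labels
X = [[69.5, 38.2, 1.0], [70.1, 37.9, 1.0],
     [118.4, 22.0, 0.0], [121.0, 20.1, 0.0]]
y = ["G", "G", "Y", "Y"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[119.0, 21.0, 0.0]]))  # -> ['Y'], a drift state
```

An ensemble, per the description above, would simply tabulate the votes of several such models to form the prediction.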
- Empirical characteristics, which can be model based and human based, are determined 106 for various states of the premises in terms of, e.g., safety of the occupants and operational conditions of the various systems within the premises.
- Examples of such systems include intrusion detection systems, fire alarm systems, public annunciation systems, burglar alarm systems, the sensors deployed at the premises, as well as other types of equipment, such as refrigeration equipment, stoves, and ovens that may be employed in the kitchen example that will be discussed below.
- Other instances of particular premises will have other types of systems that are monitored.
- the sensor based state prediction system 50 will determine the overall state of the premises as well as individual states of the various systems within the premises being monitored, as will be discussed below.
- the sensor based prediction system 50 receives 102 (by the State Representation Engine 52 ) sensor signals from a large plurality of sensors deployed in various premises throughout an area being monitored.
- the State Representation Engine 52 converts 104 the sensor values from these sensor signals into a semantic representation that is identified, as discussed above.
- this Engine 52 works in an unsupervised manner to determine various states that may exist in sensor data being received from the premises.
- the State Representation Engine 52 also determines 106 state transition metrics that are stored in the state transition matrix using both time and events populating the State time trigger and the State event trigger, as discussed above.
- the State transition matrix is accessed by the Next prediction engine 54 to make decisions and trigger actions by the sensor based state prediction system 50 .
- the Next State Prediction Engine 54 receives the various states (either from the database and/or from the State Representation Engine 52 ) and forms 108 predictions of an immediate next state of the premises/systems based on the state data stored in the state transition matrix. For such states the Next State Prediction Engine 54 predicts if the premises will be in either a safe state or a drift state over a time period in the future, as discussed above.
- the sensor based state prediction system 50 also sends 110 the predictions to the State Representation graphical user interface generator 56 , which generates a graphical user interface representation of predictions and states of various premises/systems.
- the state is tagged 112 and stored 114 in the state transition matrix.
- the sensor based state prediction system 50 , using the State Representation Engine 52 that operates in a continuous loop to generate new states and the Next State Prediction Engine 54 that produces predictions, continually monitors the premises/systems looking for transition instances that result in drift in states that indicate potential problem conditions. As the sensors in the premises being monitored operate over a period of time, the state transition matrix, the state time trigger matrix and the state event trigger matrix are filled by the state representation engine 52 , and the Next State Prediction Engine 54 processing 80 improves its predictions.
- the sensor based state prediction system 50 determines the overall state of the premises and the systems by classifying the premises and these systems into a normal or “safe” state and the drift or unsafe state. Over a period of time, the sensor based state prediction system 50 collects information about the premises and the sensor based state prediction system 50 uses this information to construct a mathematical model that includes a state representation, state transitions and state triggers.
- the state triggers can be time based triggers and event based triggers, as shown in the data structures above.
- the sensor-based state prediction system 50 receives 122 sensor data from the sensors ( FIG. 2 ) deployed in a premises, monitoring each physical object or physical quantity.
- the sensor-based state prediction system 50 is configured 124 with an identity of the premises and the physical objects being monitored by the sensors in the identified premises.
- the sensor-based state prediction system 50 processes 126 the received sensor data to produce states as set out above using the unsupervised learning models. Using these models the sensor-based state prediction system 50 monitors various physical elements to detect drift states.
- one of the sensors can be a vibration sensor that sends the sensor-based state prediction system 50 a signal indicating a level of detected vibration from the vibration sensor. This signal indicates both magnitude and frequency of vibration.
- the sensor-based state prediction system 50 determines over time normal operational levels for that sensor based on what system that sensor is monitoring and together with other sensors produces 128 series of states for the object and/or premises. These states are associated 130 with either a state status of “safe” or “unsafe” (also referred to herein as “normal” or “drift,” respectively). Part of this process of associating is provided by the learning process and this associating can be empirically determined based on human input.
- This processing thus develops more than a mere envelope or range of normal vibration amplitude and vibration frequency indications for normal operation for that particular vibration sensor, but rather produces a complex indication of a premises or object state status by combining these indications for that sensor with other indications from other sensors to produce the state transition sequences mentioned above.
- States are produced from the unsupervised learning algorithms (discussed above in FIGS. 5-5B ) based on that vibration sensor and states from other sensors, which are monitoring that object/premises.
- the unsupervised learning algorithms continually analyze that collected vibration data, producing state sequences, and analyze state sequences that include that sensor. Over time, as the analysis determines 134 that states including that sensor have entered into a drift state that corresponds to an unsafe condition, the sensor-based state prediction system 50 determines 136 a suitable action alert (in the Action layer) to indicate to a user that there may be something wrong with the physical object being monitored by that sensor.
- the analysis provided by the prediction system sends the alert to indicate that there is something going wrong with the object being monitored.
- the sensor-based state prediction system 50 produces suggested actions 138 that the premises' owner should be taking with respect to the object being monitored.
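- A greatly simplified stand-in for this vibration analysis appears below: it flags a drift condition when recent magnitude/frequency readings fall far outside the distribution learned during normal operation. The z-score test and threshold are assumptions; the patent's actual method is the unsupervised state-sequence analysis described above.

```python
import numpy as np

def vibration_drift(normal_hz, normal_amp, hz, amp, z=3.0):
    """True when a reading sits more than z standard deviations from
    the normal-operation distribution for frequency or magnitude."""
    def outlier(history, value):
        mu, sigma = np.mean(history), np.std(history) + 1e-9
        return abs(value - mu) / sigma > z
    return outlier(normal_hz, hz) or outlier(normal_amp, amp)

normal_hz = [29.7, 30.1, 30.3, 29.9]   # fictitious normal-operation data
normal_amp = [0.12, 0.11, 0.13, 0.12]
print(vibration_drift(normal_hz, normal_amp, 55.0, 0.40))  # True -> alert
```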
- Referring to FIG. 8 , an architecture 139 that combines the sensor-based state prediction system 50 ( FIG. 5 ) in a cooperative relationship with business application servers 139 a in the cloud is shown.
- the sensor-based state prediction system 50 receives sensor data from the sensor network 11 (or storage 51 ) for a particular premises, processes that data to produce states and state sequences, and uses this information, in conjunction with event indications (which can be calculated, premises event driven, and/or external, e.g., from system 125 ) and with the business application servers, to process augmented claims submissions for insured events under an insurance policy using an augmented claim filing module 220 .
- the event prediction module 140 (which can be part of the sensor-based state prediction system 50 or a separate computer system) receives 142 external inputs/notifications and inputs from the sensor-based state prediction system 50 .
- the event prediction module 140 analyzes 144 these external inputs and predictions from the sensor-based state prediction system 50 . Based on the received external input/notification and inputs from the state-based prediction engine, the event prediction module selects one or more algorithms for processing of data.
- the event prediction module 140 produces 146 control signals to control specific sensors in the premises to modify the manner in which the specific sensors sense conditions external to and within the premises.
- an augmented claim process 160 for supporting an insurance claim upon an occurrence of an insurable event at a physical premises is shown.
- This process 160 can be executed on a computer system (not shown) or by the sensor-based state prediction system 50 .
- the process 160 supports an insurance claim filing upon an occurrence of an insurable event at a physical premises.
- the process 160 receives 162 an indication of an impending insurable event that may affect the physical premises.
- the received indication is processed 164 according to an algorithm that is selected in accordance with the insurable event.
- the algorithm produces 166 one or more messages that are sent 168 to various sensors (determined from the algorithm as discussed below). These messages in the form of data packets including address information and a payload are parsed 170 at the sensors and provide commands 172 that modify operation of one or more specific sensors of the sensors of, e.g., FIG. 1 .
- messages can include the following format:
- Reposition command ⁇ IP address of camera> ⁇ reposition> ⁇ position data>
- Frame rate command ⁇ IP address of camera> ⁇ rate> ⁇ rate data>
- Other commands can include turning on a sensor that is normally in a sleep mode or requesting a reading from a sensor that is in a periodic schedule mode, where a reading from the sensor is required more frequently than scheduled. Many other commands can be provided.
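- The patent does not specify an on-the-wire encoding for these data packets, so the sketch below uses JSON over TCP purely as an assumed stand-in for the command formats above; the port number and field names are hypothetical.

```python
import json
import socket

def reposition_command(camera_ip, pan_deg, tilt_deg):
    # Reposition command <IP address of camera> <reposition> <position data>
    return json.dumps({"target": camera_ip, "op": "reposition",
                       "payload": {"pan": pan_deg, "tilt": tilt_deg}}).encode()

def frame_rate_command(camera_ip, fps):
    # Frame rate command <IP address of camera> <rate> <rate data>
    return json.dumps({"target": camera_ip, "op": "rate",
                       "payload": {"fps": fps}}).encode()

def send(packet):
    """Deliver a command packet to the addressed sensor (port assumed)."""
    target = json.loads(packet)["target"]
    with socket.create_connection((target, 9000), timeout=2) as s:
        s.sendall(packet)

# Example usage (192.0.2.10 is a documentation-only address):
# send(reposition_command("192.0.2.10", pan_deg=45, tilt_deg=-10))
# send(frame_rate_command("192.0.2.10", fps=30))
```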
- the process sends the commands that modify the operation of one or more sensors at the physical premises at a period of time prior to a likely occurrence of the insurable event.
- the process can receive an indication that is a calculated indication produced by the sensor-based state prediction system 50 .
- the sensor-based state prediction system 50 collects sensor information from the plurality of sensors deployed at the premises and continually analyzes the collected sensor information by one or more unsupervised learning models to determine normal sets of states and drift states for the premises. Upon detection of a prediction of a likely occurrence of one or more drift states, as correlated to normal operation at the premises, this calculated indication of the occurrence of a drift state is used to determine which of the sensors will have a modified operation and what the modified operation will be.
- the computer can cause other sensors in the vicinity of the stove in the kitchen to wake up, and possibly reposition video cameras to that area.
- the process 160 will access a stored computer representation (e.g., as a graph or the like) that provides a mapping of all sensors in a premises such that the computer determines what sensors to modify operation of and what the modifications would be.
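- One way to realize such a mapping, sketched here with an assumed graph library and hypothetical node names, is a graph whose room nodes connect to the sensor nodes they contain, so a breadth-first walk from the predicted event's location yields the sensors whose operation should be modified:

```python
import networkx as nx

# Hypothetical premises map: room nodes connected to the sensors they contain
g = nx.Graph()
for sensor in ("cam_1", "cam_3", "smoke_7"):
    g.add_node(sensor, sensor=True)
g.add_edges_from([("Kitchen", "cam_3"), ("Kitchen", "smoke_7"),
                  ("Hallway", "Kitchen"), ("Hallway", "cam_1")])

def sensors_near(graph, location, max_hops=2):
    """Sensor nodes reachable within max_hops of the event location."""
    dists = nx.single_source_shortest_path_length(graph, location,
                                                  cutoff=max_hops)
    return [n for n in dists if graph.nodes[n].get("sensor")]

print(sensors_near(g, "Kitchen"))  # ['cam_3', 'smoke_7', 'cam_1']
```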
- When the sensor-based state prediction system 50 produces a drift state, e.g., upon detection of an intrusion into the premises, video cameras can be repositioned by the computer producing corresponding commands.
- Such algorithms would include instructions to cause the computer to analyze the indication according to sensor data received from sensors at the physical premises to produce a likely prediction of the event at the physical premises, produce the commands to modify the operation of the one or more specific sensors of the plurality of sensors according to the likely prediction of the event at the physical premises, and send the commands to the one or more specific sensors of the plurality of sensors.
- the indication is a received indication from an external service or source such as a weather service.
- the indication is an input to the computer and/or the sensor-based state prediction system 50 .
- the computer and/or the sensor-based state prediction system collects sensor information from the plurality of sensors deployed at the premises and analyzes the indication received from the external service by applying one or more unsupervised learning models to predict one or more drift states that result from collected sensor information, normal sets of states for the premises, and the received indication.
- the sensor-based state prediction system 50 correlates the prediction of the occurrence of a drift state to determine which of the sensors at the premises to modify the operation of.
- the sensor-based state prediction system 50 produces the commands to modify the operation of specific sensors according to the indication, the drift state and the representation of the premises.
- the computer system can generate an insurance claim form subsequent to occurrence of the event by automatically populating a template insurance claim form with information required of the insurance claim form according to the template.
- the computer system retrieves from a database, operational data for specified equipment that are insured by the insurance carrier, the operational data comprising service records, raw sensor data, and alerts generated for the specified equipment and augments the insurance claim form with a report that includes the operational data for the specified equipment at a time period prior to the event.
- the computer executes an algorithm that processes a weather-related event, when the indication is a received indication from an external service that indicates a weather-related event.
- Being a weather-related event can be determined either by the computer receiving the indication from particular sources and/or by parsing the received indication to produce a representation of the indication that identifies the type of weather-related event.
- the computer analyzes the parsed indication according to the location of the physical premises to produce a likely prediction of damage to the physical premises and produces the commands to modify the operation of specific sensors according to the likely prediction of damage to the physical premises. As an example, if the prediction predicts damage to a specific portion of the premises the sensors in that portion can have their operation modified.
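- A hedged sketch of such a weather algorithm follows; the alert format, event keywords, and premises layout are all invented for illustration (real feeds such as CAP/NWS alerts carry far more structure):

```python
def plan_for_weather(alert, premises):
    """Map a parsed weather indication to sensor-modification commands."""
    event = alert["event"].lower()
    commands = []
    if "wind" in event or "hurricane" in event:
        # Predicted damage on the windward side: watch it closely
        for cam in premises["cameras"]["exterior"]:
            commands.append(("reposition", cam, alert["bearing_deg"]))
            commands.append(("rate", cam, 30))   # raise the frame rate
    if "flood" in event:
        for sensor in premises["sensors"]["basement"]:
            commands.append(("wake", sensor, None))
    return commands

premises = {"cameras": {"exterior": ["cam_1", "cam_2"]},
            "sensors": {"basement": ["water_1"]}}
alert = {"event": "High Wind Warning", "bearing_deg": 270}
print(plan_for_weather(alert, premises))
```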
- the computer produces a request to upload service and usage data from specific monitored units within the premises to an external cloud based database, by sending, to the system at the physical premises, the request to upload the service and usage data to the external database.
- the computer can, for video cameras, execute an algorithm that receives (or accesses from a database) current positioning information for each of the one or more video cameras and calculates, based at least in part on the received indication, repositioning information for the one or more video cameras.
- the computer sends the repositioning information to at least some of the one or more video cameras to modify operation of the one or more video cameras by repositioning the at least some of the one or more video cameras.
- the computer can modify frame rate (or resolution or other parameters) by receiving (or accessing) current frame rate information for the video cameras.
- the frame rate information is the frequency at which images are taken and sent by the video cameras.
- the computer calculates based at least in part on the received indication, modified frame rate information for the video cameras, and sends the modified frame rate information to the corresponding video cameras to modify the frame rate operation of such video cameras.
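- For instance, such a frame rate calculation could scale the current rate with the predicted event's severity and clamp it to the camera's capability, as in this sketch (the scaling factor and the 0-to-1 severity scale are assumptions):

```python
def modified_frame_rate(current_fps, severity, max_fps=60):
    """Scale capture rate with predicted-event severity in [0, 1]."""
    target = current_fps * (1.0 + 3.0 * severity)  # scaling factor assumed
    return min(round(target), max_fps)

current = {"cam_1": 5, "cam_2": 10}  # fps, e.g., read from a database
updates = {cam: modified_frame_rate(fps, severity=0.8)
           for cam, fps in current.items()}
print(updates)  # {'cam_1': 17, 'cam_2': 34}
```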
- the above techniques allow the intrusion detection system, or the like, for example, to capture video data prior to and after a potential identified incident, and combine the captured video with additional data feeds such that an insurance claim can be generated, validated and paid.
- the techniques involve increasing the “density” of sensor data surrounding the occurrence of an event such that the analysis of that data can be used by an insurance company to automate the evaluation of an insurance claim. This approach provides a minimal increase in overall processing and network traffic because the increase in sensor data is controlled by the system to occur over a time period prior to and after a potential identified incident.
- an insurance company can use an automated process that combines video data (such as automatically generated video primitives) with other available data streams such as weather data, seismic data, crime statistics, maintenance records, sensor data (intrusion, fire, vibration, temperature, humidity etc.), and other data that is specifically stored for a pre-set time before the incident and for a pre-set time after the incident. All of the data that is recorded is reduced to quantifiable values that can easily be evaluated with respect to ranges and/or coverages set out in an insurance policy. When the policy is set up, criteria for acceptable ranges of sensor data are also set up such that upon a system validating that the sensor data exceeded the established ranges, an immediate payment can be made.
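- That range check reduces to a comparison of recorded values against per-quantity policy ranges; the sketch below assumes illustrative range values and field names, not any carrier's actual schema:

```python
# Acceptable ranges set up when the policy is written (illustrative values)
POLICY_RANGES = {"freezer_temp_f": (-10.0, 10.0),
                 "vibration_hz": (0.0, 40.0)}

def validate_claim(recorded):
    """Return the quantities whose recorded values exceeded policy ranges."""
    breaches = {}
    for key, values in recorded.items():
        lo, hi = POLICY_RANGES[key]
        out_of_range = [v for v in values if not lo <= v <= hi]
        if out_of_range:
            breaches[key] = out_of_range
    return breaches

recorded = {"freezer_temp_f": [2.0, 3.5, 24.8], "vibration_hz": [31.0, 29.5]}
print(validate_claim(recorded))  # {'freezer_temp_f': [24.8]} -> supports payout
```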
- the augmented claim filing module 220 executes processing 220 ′ as shown.
- the sensor-based state prediction system 50 can be used in conjunction with an insurance claim module to populate and submit an insurance claim or at least supporting documentation upon an occurrence of an insured event.
- the insurance claim module in the sensor-based state prediction system 50 receives sensor data 222 for each physical object or physical quantity being monitored based on one or more sets of data from sensors ( FIG. 2 ) or sensor packs ( FIG. 16 ).
- the insurance claim module 226 prepares an electronic report that can be used to supplement or provide the insurance claim.
- the insurance claim module receives 228 a triggering message that triggers the insurance claim module to prepare an insurance claim(s) for a business that suffered an insured loss.
- the insurance module is triggered by the sensor-based state prediction system 50 detecting a state indicative of a loss or by an owner or owner system starting an insurance claim process.
- the insurance claim module parses 230 the triggering message to extract information regarding the insured loss, including identifying information regarding the premises that were insured, the nature of the loss, the insurance carrier, etc., as well as other generally conventional information.
- the insurance claim module constructs 232 a request to the sensor-based state prediction system 50 to extract 236 service and usage data for one or more monitored units within the premises, and sends 234 the request to the sensor-based state prediction system 50 .
- the sensor-based state prediction system 50 extracts service record data for each system within the premises, as well as states of the system/premises prior to the incident and/or actual sensor data of sensors monitoring the system/premises prior to the incident.
- the insurance claim module generates 238 an insurance claim form from a template form used by the insurance carrier for the particular premises.
- the insurance claim module fills in the template with the conventional information such as the policy number, address, policyholder, etc.
- the insurance claim module however also provides 240 either in the template form or as a supplemental form, the extracted operational data for each specific piece of equipment based upon service and usage records retrieved from the database 51 and sensor states prior to and subsequent to the insured event.
- the format of this supplemental form can take various configurations. One such configuration is shown in FIG. 11A .
- the populated claim form (or the populated supplemental form, i.e., supporting documentation for the insurance claim form) is populated with the premises and system/equipment IDs and the extracted operational data that shows operational performance of the identified system/equipment before the event and after the event.
- the populated claim form or the populated supplemental form also will show whether damaged, monitored systems were running properly, properly serviced, etc., based on actual sensor data and historical service record data, as the information provided reflects the actual conditions of the premises as measured by the sensor data and the calculated states determined by the sensor based prediction system 50 , showing the events before the insured event happened and possibly during the insured event. This could benefit the customer by yielding more accurate reimbursement of insurance funds depending on the type of insurance coverage.
- a set of records is provided for historical state transitions (several before, during, and after the event, if any), sensor semantic records, and service records, all pertaining to the specific equipment/system ID.
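- A sketch of this template-filling step is below; the field names and record layout are hypothetical stand-ins for the carrier's actual claim form schema:

```python
TEMPLATE = ("Policy: {policy_no}\nInsured premises: {address}\n"
            "Policyholder: {holder}\nLoss type: {loss}\n")

def build_claim(conventional, equipment_records):
    """Fill the claim template and append the supplemental report."""
    form = TEMPLATE.format(**conventional)
    lines = ["Supplemental operational data:"]
    for rec in equipment_records:  # states before/during/after the event
        lines.append(f"  {rec['equipment_id']}: states={rec['states']}, "
                     f"last_service={rec['last_service']}")
    return form + "\n".join(lines)

print(build_claim(
    {"policy_no": "P-448192", "address": "Site no.: 448192",
     "holder": "ACME Restaurants", "loss": "kitchen fire"},
    [{"equipment_id": "stove_2", "states": ["G", "Y", "R"],
      "last_service": "2020-08-01"}]))
```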
- Servers interface to the sensor based state prediction system 50 via a cloud computing configuration and parts of some networks can be run as sub-nets.
- the sensors provide, in addition to sensor data, detailed additional information that can be used in evaluating and processing that sensor data.
- a motion detector could be configured to analyze the heat signature of a warm body moving in a room to determine if the body is that of a human or a pet. Results of that analysis would be a message or data that conveys information about the body detected.
- Various sensors thus are used to sense sound, motion, vibration, pressure, heat, images, and so forth, in an appropriate combination to detect a true or verified alarm condition at the intrusion detection panel.
- Recognition software can be used to discriminate between objects that are human and objects that are an animal; further, facial recognition software can be built into video cameras and used to verify that the perimeter intrusion was the result of a recognized, authorized individual.
- video cameras would comprise a processor and memory and the recognition software to process inputs (captured images) by the camera and produce the metadata to convey information regarding recognition or lack of recognition of an individual captured by the video camera.
- the processing could also, alternatively or in addition, include information regarding characteristics of the individual in the area captured/monitored by the video camera.
- the information would be either metadata received from enhanced motion detectors and video cameras that performed enhanced analysis on inputs to the sensor that gives characteristics of the perimeter intrusion, or metadata resulting from very complex processing that seeks to establish recognition of the object.
- Sensor devices can integrate multiple sensors to generate more complex outputs so that the intrusion detection panel can utilize its processing capabilities to execute algorithms that analyze the environment by building virtual images or signatures of the environment to make an intelligent decision about the validity of a breach.
- Memory stores program instructions and data used by the processor of the intrusion detection panel.
- the memory may be a suitable combination of random access memory and read-only memory, and may host suitable program instructions (e.g. firmware or operating software), and configuration and operating data and may be organized as a file system or otherwise.
- the stored program instructions may include one or more authentication processes for authenticating one or more users.
- the program instructions stored in the memory of the panel may further store software components allowing network communications and establishment of connections to the data network.
- the software components may, for example, include an internet protocol (IP) stack, as well as driver components for the various interfaces. Other software components suitable for establishing a connection and communicating across network will be apparent to those of ordinary skill.
- Servers include one or more processing devices (e.g., microprocessors), a network interface and a memory (all not illustrated). Servers may physically take the form of a rack mounted card and may be in communication with one or more operator terminals (not shown).
- An example monitoring server is a SURGARD™ SG-System III Virtual, or similar system.
- a processor of each monitoring server acts as a controller for that monitoring server, and is in communication with, and controls overall operation of, each server.
- the processor may include, or be in communication with, the memory that stores processor executable instructions controlling the overall operation of the monitoring server.
- Suitable software enables each monitoring server to receive alarms and cause appropriate actions to occur.
- Software may include a suitable Internet protocol (IP) stack and applications/clients.
- Each monitoring server of the central monitoring station may be associated with an IP address and port(s) by which it communicates with the control panels and/or the user devices to handle alarm events, etc.
- the monitoring server address may be static, and thus always identify a particular one of the monitoring servers to the intrusion detection panels.
- dynamic addresses could be used, and associated with static domain names, resolved through a domain name service.
- the network interface card interfaces with the network to receive incoming signals, and may for example take the form of an Ethernet network interface card (NIC).
- the servers may be computers, thin-clients, or the like, to which received data representative of an alarm event is passed for handling by human operators.
- the monitoring station may further include, or have access to, a subscriber database that includes a database under control of a database engine.
- the database may contain entries corresponding to the various subscriber devices/processes of panels, such as the panel, that are serviced by the monitoring station.
- The processes described herein can be implemented as a computer program product, i.e., a computer program tangibly embodied in one or more tangible, physical hardware storage devices that are computer and/or machine-readable storage devices, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers.
- a computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
- a computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a network.
- Actions associated with implementing the processes can be performed by one or more programmable processors executing one or more computer programs to perform the functions of the calibration process. All or part of the processes can be implemented as special purpose logic circuitry, e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit).
- processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
- a processor will receive instructions and data from a read-only storage area or a random access storage area or both.
- Elements of a computer include one or more processors for executing instructions and one or more storage area devices for storing instructions and data.
- a computer will also include, or be operatively coupled to receive data from, or transfer data to, or both, one or more machine-readable storage media, such as mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
- Tangible, physical hardware storage devices that are suitable for embodying computer program instructions and data include all forms of non-volatile storage, including by way of example semiconductor storage area devices, e.g., EPROM, EEPROM, and flash storage area devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks; and volatile computer memory, e.g., RAM such as static and dynamic RAM, as well as erasable memory, e.g., flash memory.
Description
- This application is a continuation of U.S. patent application Ser. No. 15/173,795, filed on Jun. 6, 2016, the entirety of which is incorporated by reference herein.
- This description relates to operation of sensor networks such as those used for security, intrusion and alarm systems installed on industrial or commercial or residential premises.
- It is common for businesses to have various types of systems, such as intrusion detection, fire detection, and surveillance systems, for detecting various alarm conditions at their premises and signaling the conditions to a monitoring station or authorized users. These systems use various types of sensors such as motion detectors, cameras, proximity sensors, and thermal, optical, and vibration sensors. Such sensors, especially digital video recorders, typically record sensor data, e.g., video, either continually or beginning upon a trigger event. For many purposes, the video is manually reviewed to determine events leading up to and immediately following a specific occurrence.
- Described are techniques that capture sensor data, especially video data, from corresponding sensors, just prior to and following an identified incident. The techniques automatically evaluate such data by analyzing such data in conjunction with additional data feeds from other sensors and other external data sources for various purposes. One such purpose is for a tool to automatically initiate an insurance claim that can be validated and paid.
- Described herein is a system that mines accumulated data and geographically-related data from systems deployed in a premises, and which produces predictions with respect to a risk level that either equipment or a user's actions relative to the equipment pose to the premises and/or the equipment and that sends appropriate control signals to sensors for enhanced monitoring of the premises.
- According to an aspect, a computer program product tangibly stored on a computer readable hardware storage device for controlling operation of sensors at a physical premises includes instructions to cause a processor to receive a message corresponding to a prediction of an impending insurable event at the physical premises, process the received message according to an algorithm that is selected in accordance with the predicted insurable event, the algorithm producing one or more commands to modify operation of one or more specific sensors of a plurality of sensor devices that collect sensor information at the physical premises, send the commands that modify the operation of the one or more sensor devices at the physical premises at a period of time prior to a likely occurrence of the predicted insurable event, collect sensor information from the plurality of sensor devices deployed at the premises, and store the sensor information in a remote persistent storage system.
- Aspects also include systems and methods.
- The computer program product, systems, and methods may include one or more of the following additional features.
- The message includes a calculated indication produced by instructions to continually analyze the collected sensor information by one or more unsupervised learning models to determine normal sets of states and drift states for the premises to produce the prediction of an occurrence of the insurable event. The algorithm to process the received message includes instructions to determine modifications of the operation of the one or more specific sensor devices at the identified premises according to an occurrence of a drift state and produce the messages including the commands that modify the operation of the one or more specific sensor devices from the determined modifications that are based on the drift state. The instructions to determine modifications further comprise instructions to analyze the prediction of the event, determine sensor devices that are in proximity to a location of the predicted event, determine modifications to sensor devices in proximity to the location, which modifications are based on the predicted event, specific locations of the sensor devices and specific types of the sensor devices, determine the commands based on the determined modifications and send the commands to the one or more specific sensor devices. The message is received from an external service. The computer program product also includes instructions to generate an insurance claim form by automatically populating a template insurance claim form with information required by the template insurance claim form.
- The computer program product further includes instructions to detect an actual occurrence of the insurable event based on actual sensor data received from the plurality of sensor devices to provide a trigger to generate an insurance claim form and generate an insurance claim form subsequent to the actual occurrence of the event by automatically populating a template insurance claim form with information required by the template insurance claim form. The computer program product further includes instructions to retrieve from a database, operational data for specified equipment that are insured by the insurance carrier, the operational data comprising service records, raw sensor data, and/or alerts generated for the specified equipment and augment the insurance claim form with a report that includes the operational data for the specified equipment at a time period prior to the event. The algorithm may be for a weather-related event, in which case the computer program product further includes instructions to receive the indication from an external service, the indication being of the weather-related event, parse the received indication to produce a representation of the indication that identifies a type of weather-related event, analyze the parsed indication according to the location of the physical premises to produce a likely prediction of damage to the physical premises, and produce the commands to modify the operation of the one or more specific sensors of the plurality of sensors, according to the likely prediction of damage to the physical premises. The computer program product further includes instructions to produce a request to upload service and usage data for one or more monitored units within the premises to an external database and send to the system at the physical premises the request to upload to the external database, the service and usage data.
- The one or more specific sensors include one or more video cameras, and the algorithm comprises instructions to receive current positioning information for each of the one or more video cameras, calculate, based at least in part on the received indication, repositioning information for the one or more video cameras, and send the repositioning information to at least some of the one or more video cameras to modify operation of the one or more video cameras by repositioning the at least some of the one or more video cameras. The one or more specific sensors include one or more video cameras, and the algorithm comprises instructions to receive current frame rate information for each of the one or more video cameras, frame rate information being information of the frequency at which images are taken and sent by the one or more video cameras, calculate, based at least in part on the received indication, modified frame rate information for the one or more video cameras, and send the modified frame rate information to at least some of the one or more video cameras to modify the frame rate operation of the one or more video cameras.
- The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention are apparent from the description and drawings, and from the claims.
-
FIG. 1 is a schematic diagram of an exemplary networked security system. -
FIG. 2 is a block diagram of a sensor. -
FIG. 3 is a block diagram of a sensor based state prediction system. -
FIG. 3A is a diagram of a logical view of the sensor based state prediction system ofFIG. 3 . -
FIG. 4 is a flow diagram of a state representation engine. -
FIG. 5 is a flow diagram of sensor based state prediction system processing. -
FIG. 5A is a flow diagram of a training process for a Next state predictor engine that is part of the sensor based state prediction system. -
FIG. 5B is a flow diagram of a Next state predictor engine model building process. -
FIG. 6 is a flow diagram of operation processing by the sensor based state prediction system. -
FIG. 7 is a flow diagram of an example of sensor based risk profiling. -
FIG. 8 is a block diagram of a system architecture. -
FIG. 9 is a flow diagram of an overview of indication based sensor control. -
FIG. 10 is a flow diagram of an example of indication based sensor control. -
FIG. 11 is a flow diagram of an example of sensor based augmented claim filing. -
FIG. 11A is a block diagram of an exemplary format of supplemental data to augment a claim form. - Described herein are surveillance/intrusion/fire/access systems that are wirelessly connected to a variety of sensors. In some instances those systems may be wired to sensors. Examples of detectors/sensors 28 (the terms sensor and detector are used interchangeably) include motion detectors, glass break detectors, noxious gas sensors, smoke/fire detectors, contact/proximity switches, video sensors such as cameras, audio sensors such as microphones and directional microphones, temperature sensors such as infrared sensors, vibration sensors, air movement/pressure sensors, and chemical/electro-chemical sensors, e.g., VOC (volatile organic compound) detectors. In some instances, the system sensors may include weight sensors, LIDAR (technology that measures distance by illuminating a target with a laser and analyzing the reflected light), GPS (global positioning system) receivers, optical sensors, biometric sensors, e.g., retina scan sensors, EKG/heartbeat sensors in wearable computing garments, network hotspots and other network devices, and others.
- The surveillance/intrusion/fire/access systems employ wireless sensor networks and wireless devices, with remote, cloud-based server monitoring and report generation. As described in more detail below, the wireless sensor networks provide wireless links between sensors and servers, with the wireless links usually used for the lowest-level connections (e.g., sensor node device to hub/gateway).
- In the network, the edge (wirelessly-connected) tier of the network is comprised of sensor devices that provide specific sensor functions. These sensor devices have a processor and memory, and may be battery operated and include a wireless network card. The edge devices generally form a single wireless network in which each end-node communicates directly with its parent node in a hub-and-spoke-style architecture. The parent node may be, e.g., a network access point (not to be confused with an access control device or system) on a gateway or a sub-coordinator which is, in turn, connected to the access point or another sub-coordinator.
- Referring now to
FIG. 1, an exemplary (global) distributed network topology for a wireless sensor network 10 is shown. In FIG. 1 the wireless sensor network 10 is a distributed network that is logically divided into a set of tiers or hierarchical levels 12 a-12 c. In an upper tier or hierarchical level 12 a of the network are disposed servers and/or virtual servers 14 running a "cloud computing" paradigm that are networked together using well-established networking technology such as Internet protocols, or which can be private networks that use none or part of the Internet. Applications that run on those servers 14 communicate using various protocols, such as XML/SOAP and RESTful web services for Web Internet networks, and other application layer technologies such as HTTP and ATOM. The distributed network 10 has direct links between devices (nodes) as shown and discussed below. - In one implementation
hierarchical level 12 a includes a central monitoring station 49 comprised of one or more of the server computers 14 and which includes or receives information from a sensor based state prediction system 50 as will be described below. - The distributed
network 10 includes a second logically divided tier or hierarchical level 12 b, referred to here as a middle tier, that involves gateways 16 located at central, convenient places inside individual buildings and structures. These gateways 16 communicate with servers 14 in the upper tier whether the servers are stand-alone dedicated servers and/or cloud based servers running cloud applications using web programming techniques. The middle tier gateways 16 are also shown with both local area network 17 a (e.g., Ethernet or 802.11) and cellular network interfaces 17 b. - The distributed network topology also includes a lower tier (edge layer) 12 c set of devices that involve fully-functional sensor nodes 18 (e.g., sensor nodes that include wireless devices, e.g., transceivers or at least transmitters, which in
FIG. 1 are marked with an "F"), as well as wireless sensor nodes or sensor end-nodes 20 (marked in FIG. 1 with "C"). In some embodiments wired sensors (not shown) can be included in aspects of the distributed network 10. - In a typical network, the edge (wirelessly-connected) tier of the network is largely comprised of devices with specific functions. These devices have a small-to-moderate amount of processing power and memory, and often are battery powered, thus requiring that they conserve energy by spending much of their time in sleep mode. A typical model is one where the edge devices generally form a single wireless network in which each end-node communicates directly with its parent node in a hub-and-spoke-style architecture. The parent node may be, e.g., an access point on a gateway or a sub-coordinator which is, in turn, connected to the access point or another sub-coordinator.
- Each gateway is equipped with an access point (fully functional sensor node or “F” sensor node) that is physically attached to that access point and that provides a wireless connection point to other nodes in the wireless network. The links (illustrated by lines not numbered) shown in
FIG. 1 represent direct (single-hop MAC layer) connections between devices. A formal networking layer (that functions in each of the three tiers shown in FIG. 1) uses a series of these direct links together with routing devices to send messages (fragmented or non-fragmented) from one device to another over the network. - In some instances the
sensors 20 are sensor packs (not shown) that are configured for particular types of business applications, whereas in other implementations the sensors are found in installed systems such as the example security systems discussed below. - Referring to
FIG. 2, a sensor device 20 is shown. Sensor device 20 includes a processor device 21 a, e.g., a CPU and/or other type of controller device, that executes under an operating system, generally with 8-bit or 16-bit logic rather than the 32- and 64-bit logic used by high-end computers and microprocessors. The device 20 has a relatively small flash/persistent store 21 b and volatile memory 21 c in comparison with the other computing devices on the network. Generally the persistent store 21 b is about a megabyte of storage or less and the volatile memory 21 c is about several kilobytes of RAM or less. The device 20 has a network interface card 21 d that interfaces the device 20 to the network 10. Typically a wireless interface card is used, but in some instances a wired interface could be used. Alternatively, a transceiver chip driven by a wireless network protocol stack (e.g., 802.15.4/6LoWPAN) can be used as the (wireless) network interface. These components are coupled together via a bus structure. The device 20 also includes a sensor element 22 and a sensor interface 22 a that interfaces to the processor 21 a. Sensor 22 can be any of the sensor types mentioned above. - Also shown in
FIG. 2 is a panel 38. Panel 38 may be part of an intrusion detection system (not shown). The panel 38, i.e., intrusion detection panel, is coupled to plural sensors/detectors 20 (FIG. 1) dispersed throughout the physical premises. The intrusion detection system is typically in communication with a central monitoring station (also referred to as a central monitoring center, not shown) via one or more data or communication networks (not shown). Sensors/detectors may be hard wired or communicate with the panel 38 wirelessly. In general, detectors sense glass breakage, motion, gas leaks, fire, and/or breach of an entry point, and send the sensed information to the panel 38. Based on the information received from the detectors 20, the panel 38, e.g., intrusion detection panel, determines whether to trigger alarms and/or send alarm messages to the monitoring station. A user may access the intrusion detection panel to control the intrusion detection system, e.g., disarm, arm, enter predetermined settings, etc. Other systems can also be deployed, such as access control systems, etc. - Referring now to
FIG. 3, a sensor based state prediction system 50 is shown. The prediction system 50 executes on one or more of the cloud-based server computers and accesses database(s) 51 that store sensor data and store state data in a state transition matrix. In some implementations, dedicated server computers could be used as an alternative. - The sensor based
state prediction system 50 includes a State Representation Engine 52. The State Representation Engine 52 executes on one or more of the servers described above, and interfaces on the servers receive sensor signals from a large plurality of sensors deployed in various premises throughout an area. These sensor signals have sensor values and, together with other monitoring data, represent a data instance for a particular area of a particular premises at a single point in time. The data represent granular information collected continuously from the particular premises. The State Representation Engine takes these granular values and converts the values into a semantic representation. For example, a set of sensor values and monitoring data for a particular time duration are assigned a label, e.g., "State-1." As the data is collected continuously, this Engine 52 works in an unsupervised manner, as discussed below, to determine various states that may exist in the premises. - As the different states are captured, this
Engine 52 also determines state transition metrics that are stored in the form of a state transition matrix. A simple state transition matrix has all the states in its rows and columns, with each cell entry being how many times the premises moved from the state of row i to the state of column j over a period of time and/or over events. This matrix captures the operating behavior of the system. State transitions can happen either over time or due to events. Hence, the state transition metrics are captured using both time and events. A state is a representation of a group of sensors grouped according to a clustering algorithm.
state prediction system 50 captures and which is used to determine predictions of the behavior of the premises. The State transition matrix is accessed by the Next prediction engine to make decisions and trigger actions by the sensor basedstate prediction system 50. - Unsupervised learning e.g., clustering is used to group sensor readings into states and conditions over a period of time that form a time trigger state and over events to form an event trigger state. Used to populate the state transition matrix per premises.
- An exemplary simplified depiction for explanatory purposes of a State transition matrix is set out below:
-
-
             State       State       State       State       State       State
             transition  transition  transition  transition  transition  transition
Instance     x, y        x, y        x, y        x, y        x, y        x, y
             x, y        x, y        x, y        x, y        x, y        x, y
             x, y        x, y        x, y        x, y        x, y        x, y
- Entries x,y in cells of the State transition matrix are pointers that corresponds to the trigger tables that store the number of time periods and events respectively for each particular cell of the State transition matrix.
- The State time trigger is depicted below. The State time trigger tracks the time periods t1 . . . t8 for each state transition corresponding to the number x in each particular cell.
-
-
             t1            t2            t3
             State         State         State
             transition 1  transition 2  transition 3   ***
Instance     1             1             1              ***
             1             1             1              ***
             t1, t5        t2, t3        t4, t7, t8     ***
-
-
             e1            e2            e3
             State         State         State
             transition 1  transition 2  transition 3   ***
Instance     E2                          E2             ***
             E1 E1         E3                           ***
- The
State Representation Engine 52, in addition to populating the State transition matrix, also populates a State time trigger, which is a data structure to store the time value spent in each state and a distribution of the time duration for each state. Similar to the State transition matrix, the State time trigger also encapsulates the behavior knowledge of the environment. State transitions can be triggered using these values. - The
State Representation Engine 52 also populates a State event trigger. The State event trigger is a data structure to store event information. An example of an event can be a sensor on a door sensing that the door was opened. There are many other types of events. This data structure captures how many times such captured events caused a state transition. - The
State Representation Engine 52 populates the State Transition matrix and the State time and State event triggers, which together capture metrics that provide a Knowledge Layer of the operational characteristics of the premises.
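- As a minimal sketch of these Knowledge Layer structures (in Python; the class name, method names, and storage layout are assumptions for illustration, as the exact format is not specified here), the matrix and both trigger tables can be kept as counters keyed by state labels:

from collections import defaultdict

class KnowledgeLayer:
    # Illustrative sketch only; names and layout are hypothetical.
    def __init__(self):
        # state transition matrix: counts of moves from state i to state j
        self.stm = defaultdict(lambda: defaultdict(int))
        # time trigger table: time periods observed for each transition
        self.time_trigger = defaultdict(list)
        # event trigger table: events observed to cause each transition
        self.event_trigger = defaultdict(list)

    def record(self, s_from, s_to, elapsed=None, event=None):
        self.stm[s_from][s_to] += 1
        if elapsed is not None:
            self.time_trigger[(s_from, s_to)].append(elapsed)
        if event is not None:
            self.event_trigger[(s_from, s_to)].append(event)

kl = KnowledgeLayer()
kl.record("State 1", "State 2", elapsed=15 * 60)      # time-triggered move
kl.record("State 2", "State 3", event="door opened")  # event-triggered move
- The sensor based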
state prediction system 50 also includes a Next State Prediction Engine 54. The Next State Prediction Engine 54 predicts an immediate next state of the premises based on the state transition matrix. The Next State Prediction Engine 54 predicts if the premises will be in either a safe state or a drift state over a time period in the future. The term "future" as used herein refers to a defined window of time in the future, which is defined so that a response team has sufficient time to address a condition, predicted by the Next State Prediction Engine 54, that may occur in the premises, and to restore the state of the premises to a normal state. The Next State Prediction Engine operates as a Decision Layer in the sensor based state prediction system 50. - The sensor based
state prediction system 50 also includes a State Representation graphical user interface generator 56. The State Representation graphical user interface generator 56 provides a graphical user interface that is used by the response team to continuously monitor the state of the premises. The State Representation graphical user interface generator 56 receives data from the Next State Prediction Engine 54 to graphically display whether the premises is either in the safe state or the drifting state. The State Representation graphical user interface generator 56 operates as an Action Layer, where an action is performed based on input from the Knowledge and Decision Layers. - The sensor based
state prediction system 50 applies unsupervised learning models to analyze historical and current sensor data records from one or more customer premises and generates a model that can predict future patterns, anomalies, conditions and events over a time frame that can be expected for a customer site. The sensor based state prediction system 50 produces a list of one or more predictions that may result in one or more alerts being sent to one or more user devices as well as other computing systems, as will be described. The prediction system 50 uses various types of unsupervised machine learning models including Linear/Non-Linear Models, Ensemble methods, etc. - Referring now to
FIG. 3A, a logical view 50′ of the sensor based state prediction system 50 is shown. In this view, at the bottom is the raw events layer, that is, the sensor values and monitoring data from the environment under surveillance. The middle layer is an abstraction layer that abstracts these raw events as states (represented in FIG. 3A by the blocks "States" (State Representation Engine 52), STM (State Transition Matrix), STT (State Time Trigger) and SET (State Event Trigger)) that produce a state as a concise semantic representation of the underlying behavior information of the environment described by time and various sensor values at that point in time. The upper blocks are a Decisions block (Next State Prediction Engine 54) and an Actions block (State Representation graphical user interface generator 56). - Referring now to
FIG. 4, the processing 60 for the State Representation Engine 52 is shown. The State Representation Engine 52 collects 62 (e.g., from the databases 51 or directly from interfaces on the servers) received sensor signals from a large plurality of sensors deployed in various premises throughout an area that is being monitored by the sensor based state prediction system 50. The sensor data collected from the premises includes collected sensor values and monitoring data values.
- Site no.: 448192
- Kitchen thermostat: 69,
- Stove thermostat: 72,
- Outdoor security panel: Active,
- Kitchen Lights: On,
- Delivery Door: Shutdown
- As these sensor signals have sensor values that represent a data instance for a particular area of a particular premises in a single point in time, the
State Representation Engine 52 converts 64 this sensor data into semantic representations of the state of the premises at instances in time. TheState Representation Engine 52 uses 66 the converted sensor semantic representation of the sensor data collected from the premises to determine the empirical characteristics of the premises. TheState Representation Engine 52 assigns 67 an identifier to the state. - For example, the kitchen in a restaurant example for a premises identified in the system as “Site no.: 448192” uses the sensor values to produce a first state that is identified here as “
State 1.” Any labelling can be used and is typically consecutive identified and this state is semantically described as follows: -
- State 1: Kitchen thermostat: 69, Stove thermostat: 72, Outdoor security panel: Active, Kitchen Lights: On, Delivery Door: Shutdown, current time: Monday 5:00 AM PST, start time: Sunday 10:00 PM PST
- The semantic description includes the identifier “
State 1” as well as semantic descriptions of the various sensors, their values and dates and times. - The
State Representation Engine 52 determines an abstraction of a collection of “events” i.e., the sensor signals as state. The state thus is a concise representation of the underlying behavior information of the premises being monitored, described by time and data and various sensor values at that point in time and at that date. - The semantic representation of the state is stored 68 by the
State Representation Engine 52 as state transition metrics in the State Representation matrix. Over time and days, as the sensors produce different sensor values, the State Representation Engine 55 determines different states and converts these states into semantic representations that are stored the state transition metrics in the matrix, e.g., as in acontinuous loop 70. - The kitchen example is further set out below:
- The
State Representation Engine 52 collects the following data (fictitious data) from these three sensors at a particular points in time, -
-
Obstruction    Room           Stove
Detector       Thermostat     Thermostat
0              71.1755732     78.95655605
0              68.27180645    79.97821825
0              71.80483918    79.428149
0              70.46354628    81.90901291
0              69.83508114    81.12026772
0              71.46074066    81.613552
1              70.14174204    80.12242015
1              70.98180652    78.03049081
state representation engine 52, converts these raw values into state definitions and assigns (labels) each with a unique identifier for each state, as discussed above. As the premises is operated over a period of time, the Next transition matrix, the state time trigger matrix and the state event trigger matrix are filled. - Continuing with the concrete example, the
state representation engine 52 produces the following two states (State 1 is repeated here for clarity in explanation). - State 1: Kitchen thermostat: 69, Stove thermostat: 72, Outdoor security panel: Active, Kitchen Lights: On, Delivery Door: Shutdown, current time: Sunday 10:00 PM.
- State 2: Kitchen thermostat: 69, Stove thermostat: 80, Outdoor security panel: Active, Kitchen Lights: On, Delivery Door: Shutdown, current time: Sunday 10:15 PM
- State 3: Kitchen thermostat: 69, Stove thermostat: 60, Outdoor security panel: Active, Kitchen Lights: On, Delivery Door: Shutdown, current time: Monday 1:00 AM.
- Between
State 1 and State 2 there is a transition in which over a 15 minute span the Stove thermostat value increased from 72 to 80 and from State 2 to State 3 the Stove thermostat value decreased from 80 to 72 over a 2 hr. and 45 min. period, which can likely be attributed to something being cooked betweenState 1 and State 2 and by State 3 the order was filled, item removed from stove and the stove thermostat shows a lower value. - The
state representation engine 52, adds to the state transition matrix an entry that corresponds to this transition, that the premises moved fromstate 1 to state 2. Thestate representation engine 52, also adds to the state transition matrix in that entry, an indicator that the transition was “time trigger,” causing the movement, and thus thestate representation engine 52 adds an entry in state time trigger matrix. Thestate representation engine 52, thus co-ordinates various activities inside the premises under monitoring and captures/determines various operating characteristics of the premises. - Referring now to
FIG. 5, processing 80 for the Next State Prediction Engine 54 is shown. This processing 80 includes training processing 80 a (FIG. 5A) and model building processing 80 b (FIG. 5B), which are used in operation of the sensor based state prediction system 50. - Referring now to
FIG. 5A, the training processing 80 a that is part of the processing 80 for the Next State Prediction Engine 54 is shown. In FIG. 5A, training processing 80 a trains the Next State Prediction Engine 54. The Next State Prediction Engine 54 accesses 82 the state transition matrix and retrieves a set of states from the state transition matrix. From the retrieved set of states the Next State Prediction Engine 54 generates 84 a list of most probable state transitions for a given time period; the time period can be measured in minutes, hours, days, weeks, months, etc. For example, consider the time period as a day. After a certain time period of active usage, the sensor based state prediction system 50, through the state representation engine 52, has acquired knowledge of states s1 to s5.
- From the state transition matrix using the so called “Markov property” the system generates state transition sequences, as the most probable state sequences for a given day.
- An exemplary sequence uses the above fictitious examples is shown below:
-
- s1 s2 s4 s5
- s2 s2 s4 s5
- The Next
State Prediction Engine 54 determines 86 if a current sequence is different than an observed sequence in the list above. When there is a difference, the NextState Prediction Engine 54 determines 88 whether something unusual has happened in the premises being monitored or whether the state sequence is a normal condition of the premises being monitored. - With this information the Next
State Prediction Engine 54 90 these state transitions as “safe” or “drift state” transitions. Either the NextState Prediction Engine 54 or manual intervention is used to label either at the state transition level or the underlying sensor value levels (fictitious) for those state transitions producing the follow: -
-
Obstruction    Room           Stove          Safety State
Detector       Thermostat     Thermostat     (label)
0              71.1755732     78.95655605    G
0              68.27180645    79.97821825    G
0              71.80483918    79.428149      G
0              70.46354628    81.90901291    G
0              69.83508114    81.12026772    G
0              71.46074066    81.613552      G
1              70.14174204    80.12242015    G
1              70.98180652    78.03049081    G
0              68.58285177    79.981358      G
0              69.91571802    79.4885171     G
1              69.89799953    79.3838372     G
0              70.42668373    80.20397118    G
1              70.23391637    81.80212485    Y
0              68.19244768    81.19203004    G
database 51 and serves as training data for a machine learning model that is part of the NextState Recommendation Engine 54. - Referring now to
FIG. 5B, the model building processing 80 b of the Next State Recommendation Engine 54 is shown. The model building processing 80 b uses the above training data to build a model that classifies a system's state into either a safe state or an unsafe state. Other states can be classified. For example, three states can be defined, as above: "G Y R states," or green (safe state), yellow (drifting state) and red (unsafe state). For ease of explanation, two states, "safe" (also referred to as normal) and "unsafe" (also referred to as drift), are used. The model building processing 80 b accesses 102 the training data and applies 104 one or more machine learning algorithms to the training data to produce the model that will execute in the Next State Recommendation Engine 54 during monitoring of systems. Machine learning algorithms such as Linear models and Non-Linear Models, Decision tree learning, etc., which are supplemented with Ensemble methods (where two or more models' votes are tabulated to form a prediction), and so forth can be used. From this training data and the algorithms, the model is constructed 106.
database 51 and serves as training data for a machine learning model that is part of the NextState Recommendation Engine 54. - stoveThermoStat=‘(−inf-81.064396]’
- |obstructionDetector=0: G
- |obstructionDetector=1: G
- stoveThermoStat=‘(81.064396-84.098301]’
- |obstructionDetector=0. G
- |obstructionDetector=1: Y
- stove ThermoStat=‘(84.098301-87.132207]’: R
- stoveThermoStat=‘(87.132207-90.166112]’
- |obstructionDetector=0: R
- |obstructionDetector=1: R
- stoveThermoStat=‘(90.166112-inf)’
- |obstructionDetector=0: R
- |obstructionDetector=1: R
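- A tree such as the one above could be produced along the following lines (a sketch in Python using scikit-learn; the patent lists decision tree learning among several usable algorithms, and the feature layout and parameters here are illustrative assumptions):

from sklearn.tree import DecisionTreeClassifier

# columns: obstruction detector, room thermostat, stove thermostat
X = [[0, 71.18, 78.96], [0, 70.46, 81.91], [1, 70.23, 81.80], [1, 70.98, 78.03]]
y = ["G", "G", "Y", "G"]  # safety labels from the training table above

model = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(model.predict([[1, 70.5, 82.0]]))  # e.g. ['Y'] -> drifting state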
- Empirical characteristics, which can be model based and human based, are determined 106 for various states of the premises in terms of, e.g., safety of the occupants and operational conditions of the various systems within the premises. Examples of such systems include intrusion detection systems, fire alarm systems, public annunciation systems, burglar alarm systems, the sensors deployed at the premises, as well as other types of equipment, such as refrigeration equipment, stoves, and ovens that may be employed in the kitchen example that will be discussed below. Other instances of particular premises will have other types of systems that are monitored. Based on the empirically determined states of the various systems within the premises being monitored, the sensor based state prediction system 50 will determine the overall state of the premises as well as individual states of the various systems within the premises being monitored, as will be discussed below. - Referring now to
FIG. 6, operational processing 100 of the sensor based state prediction system 50 is shown. The sensor based prediction system 50 receives 102 (by the State Representation Engine 52) sensor signals from a large plurality of sensors deployed in various premises throughout an area being monitored. The State Representation Engine 52 converts 104 the sensor values from these sensor signals into a semantic representation that is identified, as discussed above. As the data is collected continuously, this Engine 52 works in an unsupervised manner to determine various states that may exist in the sensor data being received from the premises. As the different states are captured, the State Representation Engine 52 also determines 106 state transition metrics that are stored in the state transition matrix, using both time and events to populate the State time trigger and the State event trigger, as discussed above. The State transition matrix is accessed by the Next State Prediction Engine 54 to make decisions and trigger actions by the sensor based state prediction system 50.
State Prediction Engine 54 receives the various states (either from the database and/or from theState Representation Engine 52 andforms 108 predictions of an immediate Next state of the premises/systems based the state data stored in the state transition matrix. For such states the NextState Prediction Engine 54 predicts if the premises will be in either a safe state or a drift state over a time period in the Next as discussed above. - The sensor based
state prediction system 50 also sends 110 the predictions to the State Representation engine 56 that generates a graphical user interface to provide a graphical user interface representation of predictions and states of various premises/systems. The state is tagged 112 and stored 114 in the state transition matrix. - The sensor based
state prediction system 50 using theState Representation Engine 52 that operates in a continuous loop to generate new states and the NextState Prediction Engine 54 that produces predictions together continually monitor the premises/systems looking for transition instances that result in drift in states that indicate potential problem conditions. As the sensors in the premises being monitored operate over a period of time, the state transition matrix, the state time trigger matrix and the state event trigger matrix are filled by thestate representation engine 52 and the NextState Prediction Engine 54processing 80 improves on predictions. - The sensor based
state prediction system 50 thus determines the overall state of the premises and the systems by classifying the premises and these systems into a normal or “safe” state and the drift or unsafe state. Over a period of time, the sensor basedstate prediction system 50 collects information about the premises and the sensor basedstate prediction system 50 uses this information to construct a mathematical model that includes a state representation, state transitions and state triggers. The state triggers can be time based triggers and event based triggers, as shown in the data structures above. - Referring now to
FIG. 7, processing 120 of sensor information using the architecture above is shown. The sensor-based state prediction system 50 receives 122 sensor data from sensors monitoring each physical object or physical quantity from the sensors (FIG. 2) deployed in a premises. The sensor-based state prediction system 50 is configured 124 with an identity of the premises and the physical objects being monitored by the sensors in the identified premises. The sensor based state prediction system 50 processes 126 the received sensor data to produce states as set out above using the unsupervised learning models. Using these models the sensor-based state prediction system 50 monitors various physical elements to detect drift states.
state prediction system 50 determines over time normal operational levels for that sensor based on what system that sensor is monitoring and together with other sensors produces 128 series of states for the object and/or premises. These states are associated 130 with either a state status of “safe” or “unsafe” (also referred to herein as “normal” or “drift,” respectively). Part of this process of associating is provided by the learning process and this associating can be empirically determined based on human input. This processing thus develops more than a mere envelope or range of normal vibration amplitude and vibration frequency indications for normal operation for that particular vibration sensor, but rather produces a complex indication of a premises or object state status by combining these indications for that sensor with other indications from other sensors to produce the state transition sequences mentioned above. - States are produced from the unsupervised learning algorithms (discussed above in
FIGS. 5-5B ) based on that vibration sensor and states from other sensors, which are monitoring that object/premises. The unsupervised learning algorithms continually analyze that collected vibration data and producing state sequences and analyze state sequences that include that sensor. Overtime, as the analysis determines 134 that states including that sensor have entered into a drift state that corresponds to an unsafe condition, the sensor-basedstate prediction system 50 determines 136 a suitable action alert (in the Action layer) to indicate to a user that there may be something wrong with the physical object being monitored by that sensor. The analysis provided by the prediction system sends the alert to indicate that there is something going wrong with object being monitored. The sensor-basedstate prediction system 50 produces suggestedactions 138 that the premises' owner should be taking with respect to the object being monitored. - Referring now to
FIG. 8, an architecture 139 that combines the sensor-based state prediction system 50 (FIG. 5) in a cooperative relationship with business application servers 139 a in the cloud is shown. In FIG. 8, the sensor-based state prediction system 50 receives sensor data from the sensor network 11 (or storage 51) for a particular premises, processes that data to produce states and state sequences, and uses this information in conjunction with event indications that can be calculated, premises event driven, and/or external from system 125, as well as business application servers, to process augmented claims submissions for insured events under an insurance policy using an augmented claim filing module 220. - Referring now to
FIG. 9, an event prediction module 140 is shown. The event prediction module 140 (which can be part of the sensor-based state prediction system 50 or a separate computer system) receives 142 external inputs/notifications and inputs from the sensor-based state prediction system 50. The event prediction module 140 analyzes 144 these external inputs and predictions from the sensor-based state prediction system 50. Based on the received external input/notification and inputs from the state-based prediction engine, the event prediction module selects one or more algorithms for processing of data. The event prediction module 140 produces 146 control signals to control specific sensors in the premises to modify the manner in which the specific sensors sense conditions external to and within the premises. - Referring now to
FIG. 10, an augmented claim process 160 for supporting an insurance claim upon an occurrence of an insurable event at a physical premises is shown. This process 160 can be executed on a computer system (not shown) or by the sensor-based state prediction system 50. The process 160 supports an insurance claim filing upon an occurrence of an insurable event at a physical premises.
process 160 receives 162 an indication of an impending insurable event that may affect the physical premises. The received indication is processed 164 according to an algorithm that is selected in accordance with the insurable event. The algorithm produces 166 one or more messages that are sent 168 to various sensors (determined from the algorithm as discussed below). These messages in the form of data packets including address information and a payload are parsed 170 at the sensors and providecommands 172 that modify operation of one or more specific sensors of the sensors of, e.g.,FIG. 1 . - These commands cause the specific sensors to collect sensor information in a different manner that prior to execution of the command by the sensor. For example, messages can include the following format:
- Command <address of device> <command> <data>
- Examples for a video camera
- Reposition command: <IP address of camera> <reposition> <position data>
- Frame rate command: <IP address of camera> <rate> <rate data>
- Other commands can include turning on a sensor that is normally in a sleep mode or requesting a reading from a sensor that is in a periodic schedule mode, where a reading from the sensor is required more frequently than scheduled. Many other commands can be provided.
- The process sends the commands that modify the operation of one or more sensors at the physical premises at a period of time prior to a likely occurrence of the insurable event.
- The process can receive an indication that is a calculated indication produced by the sensor-based
state prediction system 50. In this scenario, the sensor-basedstate prediction system 50 collects sensor information from the plurality of sensors deployed at the premises and continually analyzes the collected sensor information by one or more unsupervised learning models to determine normal sets of states and drift states for the premises. Upon detection of a prediction of a likely occurrence of one or more drift states, as correlated to normal operation at the premises, this calculated indication of the occurrence of a drift state is used to determine which of the sensors will have a modified operation and what the modified operation will be. - For example, returning to the example of the kitchen, in the event that stove and room temperature sensors sense conditions that cause a drift state “State N” below, the computer can cause other sensors in the vicinity of the Stove in the Kitchen to wake up, and possible reposition video cameras to that area.
- State N: Kitchen thermostat: 90, Stove thermostat: 120, Outdoor security panel: Active, Kitchen Lights: On, Delivery Door: Shutdown, current time: Monday 2:15 AM
- In order to accomplish this the
process 160 will access a stored computer representation (e.g., as a graph or the like) that provides a mapping of all sensors in a premises such that the computer determines what sensors to modify operation of and what the modifications would be. - Another example, would be an intrusion detection. The sensor-based
state prediction system 50 produces a drift state - State Y: Room thermostat: 68, Outdoor security panel: Active, Lights: Off, Delivery Door: Opened, current time: Monday 2:15 AM
- Upon detection of an intrusion into the premises, video cameras can be repositioned by the computer producing corresponding commands.
- Another example is where the indication is a premises-related event. Such algorithms would include instructions to cause the computer to analyze the indication according to sensor data received from sensors at the physical premises to produce a likely prediction of the event at the physical premises, produce the commands to modify the operation of the one or more specific sensors of the plurality of sensors according to the likely prediction of the event at the physical premises, and send the commands to the one or more specific sensors of the plurality of sensors.
- Another example is where the indication is a received indication from an external service or source such as a weather service. The indication is an input to the computer and/or the sensor-based
state prediction system 50. The computer and/or the sensor-based state prediction system collects sensor information from the plurality of sensors deployed at the premises and analyzes the indication received from the external service by applying one or more unsupervised learning models to predict one or more drift states that result from collected sensor information, normal sets of states for the premises, and the received indication. The sensor-basedstate prediction system 50 correlates the prediction of the occurrence of a drift state to determine which of the sensors at the premises to modify the operation of. The sensor-basedstate prediction system 50 produces the commands to modify the operation of specific sensors according to the indication, the drift state and the representation of the premises. - The computer system can generate an insurance claim form subsequent to occurrence of the event by automatically populating a template insurance claim form with information required of the insurance claim form according to the template. Upon detect from the analysis by the drift states of an actual occurrence of one or more of the predicted drift states, actual sensor data received from the plurality of sensors, and/or an external notification any of these can trigger generation of an insurance claim.
- The computer system retrieves from a database, operational data for specified equipment that are insured by the insurance carrier, the operational data comprising service records, raw sensor data, and alerts generated for the specified equipment and augments the insurance claim form with a report that includes the operational data for the specified equipment at a time period prior to the event.
- For example, the computer executes an algorithm that processes a weather-related event, when the indication is a received indication from an external service that indicates a weather-related event. Being a weather related event can be determined either by the computer receiving the indication from particular sources and/or by parse the received indication to produce a representation of the indication that identifies the type of weather-related event. The computer analyzes the parsed indication according to the location of the physical premises to produce a likely prediction of damage to the physical premises and produces the commands to modify the operation of specific sensors according to the likely prediction of damage to the physical premises. As an example, if the prediction predicts damage to a specific portion of the premises the sensors in that portion can have their operation modified.
- With any of the indications, but especially those that involve potential for catastrophic damage, the computer produces a request to upload service and usage data from specific monitored units within the premises to an external cloud based database, by sending to the system at the physical premises the request to upload to the external database, the service and usage data.
- When modifying operation of sensors, the computer can for video cameras, execute an algorithm that receives (or accesses from a database) current positioning information for each of the one or more video cameras and calculates based at least in part on the received indication repositioning information for the one or more video cameras. The computer sends the repositioning information to at least some of the one or more video cameras to modify operation of the one or more video cameras by repositioning the at least some of the one or more video cameras.
- Similarly, the computer can modify frame rate (or resolution or other parameters) by receiving (or accessing) current frame rate information for the video cameras. The frame rate information specifies the frequency at which images are taken and sent by the video cameras. The computer calculates, based at least in part on the received indication, modified frame rate information for the video cameras, and sends the modified frame rate information to the corresponding video cameras to modify their frame rate operation.
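- The frame-rate adjustment might look like the following sketch, where the baseline and ceiling rates and the severity scaling are assumed values:
```python
DEFAULT_FPS = 5   # assumed baseline frequency at which images are taken and sent
EVENT_FPS = 30    # assumed ceiling while an event is predicted

def frame_rate_commands(camera_ids, severity):
    """Scale each camera's frame rate with event severity, capped at the ceiling."""
    new_fps = min(EVENT_FPS, DEFAULT_FPS * max(1, severity))
    return [{"camera_id": c, "set_fps": new_fps} for c in camera_ids]

print(frame_rate_commands(["cam-1", "cam-2"], severity=3))  # both set to 15 fps
```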
- The above techniques allow the intrusion detection system, or the like, to capture video data prior to and after an identified potential incident, and to combine the captured video with additional data feeds so that an insurance claim can be generated, validated, and paid. The techniques involve increasing the “density” of sensor data surrounding the occurrence of an event so that analysis of that data can be used by an insurance company to automate the evaluation of an insurance claim. This approach adds minimal overall processing and network traffic because the increase in sensor data is controlled by the system to occur over a time period just before and after an identified potential incident.
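- A bounded ring buffer is one assumed way to confine the density increase to a window around the incident, as in this sketch:
```python
from collections import deque

class EventWindowRecorder:
    """Keep a rolling pre-event buffer; on an event, capture a fixed post-event window."""

    def __init__(self, pre_seconds, post_seconds, rate_hz):
        self.pre = deque(maxlen=pre_seconds * rate_hz)  # rolling pre-event samples
        self.post_budget = post_seconds * rate_hz
        self.post_remaining = 0
        self.captured = []

    def on_sample(self, sample):
        if self.post_remaining > 0:
            self.captured.append(sample)
            self.post_remaining -= 1
        else:
            self.pre.append(sample)

    def on_event(self):
        """Freeze the pre-event buffer and start the post-event capture window."""
        self.captured = list(self.pre)
        self.post_remaining = self.post_budget

rec = EventWindowRecorder(pre_seconds=2, post_seconds=2, rate_hz=4)
for t in range(20):
    rec.on_sample(t)
    if t == 10:
        rec.on_event()
print(rec.captured)  # samples 3..10 (pre window) followed by 11..18 (post window)
```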
- Often, insurance companies make decisions about certain types of claims without a significant amount of actual evidence of what happened leading up to the insurable incident. Using the proposed system, an insurance company can use an automated process that combines video data (such as automatically generated video primitives) with other available data streams such as weather data, seismic data, crime statistics, maintenance records, sensor data (intrusion, fire, vibration, temperature, humidity, etc.), and other data that is specifically stored for a pre-set time before the incident and for a pre-set time after the incident. All of the recorded data is reduced to quantifiable values that can easily be evaluated against the ranges and/or coverages set out in an insurance policy. When the policy is set up, criteria for acceptable ranges of sensor data are also set up, such that upon a system validating that the sensor data exceeded the established ranges, an immediate payment can be made.
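- The range validation that gates immediate payment could, under an assumed policy schema, reduce to a comparison like this sketch:
```python
# Assumed acceptable ranges established when the policy was set up.
POLICY_RANGES = {"wind_speed_mph": (0, 90), "water_level_in": (0, 2)}

def validate_against_policy(quantified_values):
    """Return the recorded measurements that exceeded the policy's established ranges."""
    exceeded = {}
    for name, value in quantified_values.items():
        lo, hi = POLICY_RANGES.get(name, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            exceeded[name] = value
    return exceeded

if validate_against_policy({"wind_speed_mph": 112, "water_level_in": 1}):
    print("established ranges exceeded: claim eligible for immediate payment")
```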
- Referring now to
FIG. 11, the augmented claim filing module 220 (FIG. 8) executes processing 220′ as shown. The sensor-based state prediction system 50 can be used in conjunction with an insurance claim module to populate and submit an insurance claim, or at least supporting documentation, upon an occurrence of an insured event. The insurance claim module in the sensor-based state prediction system 50 receives sensor data 222 for each physical object or physical quantity being monitored based on one or more sets of data from sensors (FIG. 2) or sensor packs (FIG. 16). Upon the occurrence 224 of an event that results in an insurance claim, the insurance claim module 226 prepares an electronic report that can be used to supplement or provide the insurance claim.
- The insurance claim module receives 228 a triggering message that triggers the insurance claim module to prepare an insurance claim(s) for a business that suffered an insured loss. The insurance module is triggered by the sensor-based
state prediction system 50 detecting a state indicative of a loss, or by an owner or owner system starting an insurance claim process. Upon receipt of the triggering message, the insurance claim module parses 230 the triggering message to extract information regarding the insured loss, including identifying information regarding the premises that were insured, the nature of the loss, the insurance carrier, etc., as well as other generally conventional information.
- From this extracted, generally conventional information, the insurance claim module constructs 232 a request to the sensor-based
state prediction system 50 to extract 236 service and usage data for one or more monitored units within the premises, and sends 234 the request to the sensor-based state prediction system 50. In particular, the sensor-based state prediction system 50 extracts service record data for each system within the premises, as well as states of the system/premises prior to the incident and/or actual sensor data from sensors monitoring the system/premises prior to the incident.
- The insurance claim module generates 238 an insurance claim form from a template form used by the insurance carrier for the particular premises. The
insurance claim module fills in the template with the conventional information such as the policy number, address, policyholder, etc. The insurance claim module, however, also provides 240, either in the template form or as a supplemental form, the extracted operational data for each specific piece of equipment based upon service and usage records retrieved from the database 51 and sensor states prior to and subsequent to the insured event. The format of this supplemental form can take various configurations. One such configuration is shown in FIG. 11A.
- Referring now to
FIG. 11A, the populated claim form (or the populated supplemental form, i.e., supporting documentation for the insurance claim form) is populated with the premises and system/equipment IDs and the extracted operational data that shows operational performance of the identified system/equipment before and after the event. The populated claim form or supplemental form also shows whether damaged, monitored systems were running properly and were properly serviced, based on actual sensor data and historical service record data, because the information provided reflects the actual conditions of the premises as measured by the sensor data and the calculated states as determined by the sensor-based prediction system 50, covering the events before the insured event happened and possibly during the insured event. This could benefit the customer by yielding more accurate reimbursement of insurance funds depending on the type of insurance coverage. Thus in FIG. 11A a set of records is provided for historical state transitions (several before, during, and after the event, if any), sensor semantic records, and service records, all pertaining to the specific equipment/system ID.
- Various combinations of the above-described processes are used to implement the features described.
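- A minimal sketch of populating a carrier's template with the conventional information and the extracted operational records, as described with FIG. 11A above; the field names and record layout below are illustrative assumptions, not the format of FIG. 11A.
```python
# Assumed template; a real carrier form would define its own required fields.
TEMPLATE = ("Policy: {policy_number}\nAddress: {address}\n"
            "Policyholder: {policyholder}\nLoss type: {loss_type}\n")

def generate_claim_form(conventional_info, operational_records):
    """Fill the template, then append a supplemental section of operational data."""
    form = TEMPLATE.format(**conventional_info)
    supplement = "\n".join(
        f"[{r['equipment_id']}] state={r['state']} last_serviced={r['last_service']}"
        for r in operational_records)
    return form + "Supplemental operational data:\n" + supplement

print(generate_claim_form(
    {"policy_number": "P-1234", "address": "1 Main St", "policyholder": "Acme Co",
     "loss_type": "water damage"},
    [{"equipment_id": "HVAC-7", "state": "drift", "last_service": "2016-04-01"}]))
```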
- Servers interface to the sensor-based
state prediction system 50 via a cloud computing configuration, and parts of some networks can be run as sub-nets. In some embodiments, the sensors provide, in addition to sensor data, detailed additional information that can be used in evaluating that sensor data. For example, a motion detector could be configured to analyze the heat signature of a warm body moving in a room to determine if the body is that of a human or a pet. Results of that analysis would be a message or data that conveys information about the body detected. Various sensors thus are used to sense sound, motion, vibration, pressure, heat, images, and so forth, in an appropriate combination to detect a true or verified alarm condition at the intrusion detection panel.
- Recognition software can be used to discriminate between objects that are human and objects that are animals; further, facial recognition software can be built into video cameras and used to verify that a perimeter intrusion was the result of a recognized, authorized individual. Such video cameras would comprise a processor, memory, and the recognition software to process inputs (captured images) from the camera and produce metadata that conveys information regarding recognition or lack of recognition of an individual captured by the video camera. The processing could also, alternatively or in addition, include information regarding characteristics of the individual in the area captured/monitored by the video camera. Thus, depending on the circumstances, the information would be either metadata received from enhanced motion detectors and video cameras that performed enhanced analysis on sensor inputs, giving characteristics of the perimeter intrusion, or metadata resulting from more complex processing that seeks to establish recognition of the object.
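- The human-versus-pet discrimination might, in simplified form, reduce to thresholding features of the heat signature; the features and thresholds below are illustrative assumptions, not calibrated values from this disclosure.
```python
def classify_heat_signature(height_m, surface_area_m2):
    """Rough rule: a warm body tall and large enough is reported as human."""
    if height_m >= 1.2 and surface_area_m2 >= 0.5:
        return {"detected": "human"}
    return {"detected": "pet"}

# The resulting message conveys information about the body detected.
print(classify_heat_signature(height_m=1.7, surface_area_m2=0.8))  # human
print(classify_heat_signature(height_m=0.4, surface_area_m2=0.2))  # pet
```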
- Sensor devices can integrate multiple sensors to generate more complex outputs so that the intrusion detection panel can utilize its processing capabilities to execute algorithms that analyze the environment by building virtual images or signatures of the environment to make an intelligent decision about the validity of a breach.
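- As one sketch of such an intelligent decision over integrated sensor outputs, an assumed weighted vote treats a breach as valid only when co-located signals agree; the weights and threshold are illustrative.
```python
# Assumed per-sensor weights for co-located signals.
BREACH_WEIGHTS = {"motion": 0.4, "glass_break": 0.35, "vibration": 0.25}

def breach_is_valid(signals, threshold=0.6):
    """Weighted vote over which co-located sensors fired; threshold is assumed."""
    score = sum(BREACH_WEIGHTS.get(name, 0.0) for name, fired in signals.items() if fired)
    return score >= threshold

print(breach_is_valid({"motion": True, "glass_break": True, "vibration": False}))  # True
```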
- Memory stores program instructions and data used by the processor of the intrusion detection panel. The memory may be a suitable combination of random access memory and read-only memory, may host suitable program instructions (e.g., firmware or operating software) and configuration and operating data, and may be organized as a file system or otherwise. The stored program instructions may include one or more authentication processes for authenticating one or more users. The program instructions stored in the memory of the panel may further store software components allowing network communications and establishment of connections to the data network. The software components may, for example, include an internet protocol (IP) stack, as well as driver components for the various interfaces. Other software components suitable for establishing a connection and communicating across a network will be apparent to those of ordinary skill.
- Program instructions stored in the memory, along with configuration data, may control overall operation of the system. Servers include one or more processing devices (e.g., microprocessors), a network interface, and a memory (all not illustrated). Servers may physically take the form of a rack-mounted card and may be in communication with one or more operator terminals (not shown). An example monitoring server is a SURGARD™ SG-System III Virtual, or similar system.
- The processor of each monitoring server acts as a controller for that monitoring server, and is in communication with, and controls overall operation of, that server. The processor may include, or be in communication with, the memory that stores processor-executable instructions controlling the overall operation of the monitoring server. Suitable software enables each monitoring server to receive alarms and cause appropriate actions to occur. Software may include a suitable Internet protocol (IP) stack and applications/clients.
- Each monitoring server of the central monitoring station may be associated with an IP address and port(s) by which it communicates with the control panels and/or the user devices to handle alarm events, etc. The monitoring server address may be static, and thus always identify a particular one of the monitoring servers to the intrusion detection panels. Alternatively, dynamic addresses could be used and associated with static domain names, resolved through a domain name service.
- The network interface card interfaces with the network to receive incoming signals, and may, for example, take the form of an Ethernet network interface card (NIC). The servers may be computers, thin clients, or the like, to which received data representative of an alarm event is passed for handling by human operators. The monitoring station may further include, or have access to, a subscriber database that includes a database under control of a database engine. The database may contain entries corresponding to the various subscriber devices/processes and panels, like the panel, that are serviced by the monitoring station.
- All or part of the processes described herein and their various modifications (hereinafter referred to as “the processes”) can be implemented, at least in part, via a computer program product, i.e., a computer program tangibly embodied in one or more tangible, physical hardware storage devices that are computer and/or machine-readable storage devices for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a network.
- Actions associated with implementing the processes can be performed by one or more programmable processors executing one or more computer programs to perform the functions of the described processes. All or part of the processes can be implemented as special purpose logic circuitry, e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit).
- Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only storage area or a random access storage area or both. Elements of a computer (including a server) include one or more processors for executing instructions and one or more storage area devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from, or transfer data to, or both, one or more machine-readable storage media, such as mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
- Tangible, physical hardware storage devices that are suitable for embodying computer program instructions and data include all forms of non-volatile storage, including by way of example semiconductor storage area devices, e.g., EPROM, EEPROM, and flash storage area devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks, as well as volatile computer memory, e.g., RAM such as static and dynamic RAM, and erasable memory, e.g., flash memory.
- In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other actions may be provided, or actions may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Likewise, actions depicted in the figures may be performed by different entities or consolidated.
- Elements of different embodiments described herein may be combined to form other embodiments not specifically set forth above. Elements may be left out of the processes, computer programs, Web pages, etc. described herein without adversely affecting their operation. Furthermore, various separate elements may be combined into one or more individual elements to perform the functions described herein.
- Other implementations not specifically described herein are also within the scope of the following claims.
Claims (21)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/071,722 US20210097624A1 (en) | 2016-06-06 | 2020-10-15 | Method and apparatus for increasing the density of data surrounding an event |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/173,795 US10810676B2 (en) | 2016-06-06 | 2016-06-06 | Method and apparatus for increasing the density of data surrounding an event |
US17/071,722 US20210097624A1 (en) | 2016-06-06 | 2020-10-15 | Method and apparatus for increasing the density of data surrounding an event |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/173,795 Continuation US10810676B2 (en) | 2016-06-06 | 2016-06-06 | Method and apparatus for increasing the density of data surrounding an event |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210097624A1 true US20210097624A1 (en) | 2021-04-01 |
Family
ID=59091565
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/173,795 Active 2037-08-25 US10810676B2 (en) | 2016-06-06 | 2016-06-06 | Method and apparatus for increasing the density of data surrounding an event |
US17/071,722 Abandoned US20210097624A1 (en) | 2016-06-06 | 2020-10-15 | Method and apparatus for increasing the density of data surrounding an event |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/173,795 Active 2037-08-25 US10810676B2 (en) | 2016-06-06 | 2016-06-06 | Method and apparatus for increasing the density of data surrounding an event |
Country Status (2)
Country | Link |
---|---|
US (2) | US10810676B2 (en) |
WO (1) | WO2017213918A1 (en) |
Families Citing this family (66)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9411327B2 (en) | 2012-08-27 | 2016-08-09 | Johnson Controls Technology Company | Systems and methods for classifying data in building automation systems |
US10534326B2 (en) | 2015-10-21 | 2020-01-14 | Johnson Controls Technology Company | Building automation system with integrated building information model |
US11268732B2 (en) | 2016-01-22 | 2022-03-08 | Johnson Controls Technology Company | Building energy management system with energy analytics |
US11947785B2 (en) | 2016-01-22 | 2024-04-02 | Johnson Controls Technology Company | Building system with a building graph |
US11768004B2 (en) | 2016-03-31 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | HVAC device registration in a distributed building management system |
US10417451B2 (en) | 2017-09-27 | 2019-09-17 | Johnson Controls Technology Company | Building system with smart entity personal identifying information (PII) masking |
US11774920B2 (en) | 2016-05-04 | 2023-10-03 | Johnson Controls Technology Company | Building system with user presentation composition based on building context |
US10505756B2 (en) | 2017-02-10 | 2019-12-10 | Johnson Controls Technology Company | Building management system with space graphs |
WO2018020306A1 (en) * | 2016-07-29 | 2018-02-01 | Telefonaktiebolaget Lm Ericsson (Publ) | System and method for resource-aware and time-critical iot frameworks |
US10650329B2 (en) * | 2016-12-21 | 2020-05-12 | Hartford Fire Insurance Company | System to facilitate predictive analytic algorithm deployment in an enterprise |
US10684033B2 (en) | 2017-01-06 | 2020-06-16 | Johnson Controls Technology Company | HVAC system with automated device pairing |
US11900287B2 (en) | 2017-05-25 | 2024-02-13 | Johnson Controls Tyco IP Holdings LLP | Model predictive maintenance system with budgetary constraints |
US10515098B2 (en) | 2017-02-10 | 2019-12-24 | Johnson Controls Technology Company | Building management smart entity creation and maintenance using time series data |
US11307538B2 (en) | 2017-02-10 | 2022-04-19 | Johnson Controls Technology Company | Web services platform with cloud-based feedback control |
US10452043B2 (en) | 2017-02-10 | 2019-10-22 | Johnson Controls Technology Company | Building management system with nested stream generation |
US11994833B2 (en) | 2017-02-10 | 2024-05-28 | Johnson Controls Technology Company | Building smart entity system with agent based data ingestion and entity creation using time series data |
US10854194B2 (en) | 2017-02-10 | 2020-12-01 | Johnson Controls Technology Company | Building system with digital twin based data ingestion and processing |
US11280509B2 (en) | 2017-07-17 | 2022-03-22 | Johnson Controls Technology Company | Systems and methods for agent based building simulation for optimal control |
US11360447B2 (en) | 2017-02-10 | 2022-06-14 | Johnson Controls Technology Company | Building smart entity system with agent based communication and control |
US11764991B2 (en) | 2017-02-10 | 2023-09-19 | Johnson Controls Technology Company | Building management system with identity management |
US10417245B2 (en) | 2017-02-10 | 2019-09-17 | Johnson Controls Technology Company | Building management system with eventseries processing |
US11042144B2 (en) | 2017-03-24 | 2021-06-22 | Johnson Controls Technology Company | Building management system with dynamic channel communication |
US11327737B2 (en) | 2017-04-21 | 2022-05-10 | Johnson Controls Tyco IP Holdings LLP | Building management system with cloud management of gateway configurations |
US10788229B2 (en) | 2017-05-10 | 2020-09-29 | Johnson Controls Technology Company | Building management system with a distributed blockchain database |
US11022947B2 (en) | 2017-06-07 | 2021-06-01 | Johnson Controls Technology Company | Building energy optimization system with economic load demand response (ELDR) optimization and ELDR user interfaces |
WO2018232147A1 (en) | 2017-06-15 | 2018-12-20 | Johnson Controls Technology Company | Building management system with artificial intelligence for unified agent based control of building subsystems |
WO2019018008A1 (en) | 2017-07-21 | 2019-01-24 | Johnson Controls Technology Company | Building management system with dynamic work order generation with adaptive diagnostic task details |
US11726632B2 (en) | 2017-07-27 | 2023-08-15 | Johnson Controls Technology Company | Building management system with global rule library and crowdsourcing framework |
US20190096214A1 (en) | 2017-09-27 | 2019-03-28 | Johnson Controls Technology Company | Building risk analysis system with geofencing for threats and assets |
US11768826B2 (en) | 2017-09-27 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | Web services for creation and maintenance of smart entities for connected devices |
US11314788B2 (en) | 2017-09-27 | 2022-04-26 | Johnson Controls Tyco IP Holdings LLP | Smart entity management for building management systems |
WO2019067627A1 (en) | 2017-09-27 | 2019-04-04 | Johnson Controls Technology Company | Systems and methods for risk analysis |
US10962945B2 (en) | 2017-09-27 | 2021-03-30 | Johnson Controls Technology Company | Building management system with integration of data into smart entities |
US10809682B2 (en) | 2017-11-15 | 2020-10-20 | Johnson Controls Technology Company | Building management system with optimized processing of building system data |
US11281169B2 (en) | 2017-11-15 | 2022-03-22 | Johnson Controls Tyco IP Holdings LLP | Building management system with point virtualization for online meters |
US11127235B2 (en) | 2017-11-22 | 2021-09-21 | Johnson Controls Tyco IP Holdings LLP | Building campus with integrated smart environment |
US10928295B2 (en) | 2017-12-22 | 2021-02-23 | Honeywell International Inc. | Network assisted particulate matter sensor |
US11954713B2 (en) | 2018-03-13 | 2024-04-09 | Johnson Controls Tyco IP Holdings LLP | Variable refrigerant flow system with electricity consumption apportionment |
US11016648B2 (en) | 2018-10-30 | 2021-05-25 | Johnson Controls Technology Company | Systems and methods for entity visualization and management with an entity node editor |
US20200162280A1 (en) | 2018-11-19 | 2020-05-21 | Johnson Controls Technology Company | Building system with performance identification through equipment exercising and entity relationships |
US11769117B2 (en) | 2019-01-18 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | Building automation system with fault analysis and component procurement |
US10788798B2 (en) | 2019-01-28 | 2020-09-29 | Johnson Controls Technology Company | Building management system with hybrid edge-cloud processing |
US11082521B2 (en) * | 2019-05-24 | 2021-08-03 | California Eastern Laboratories, Inc. | Single source of information apparatuses, methods, and systems |
US12040911B2 (en) | 2019-12-31 | 2024-07-16 | Tyco Fire & Security Gmbh | Building data platform with a graph change feed |
US20210200713A1 (en) | 2019-12-31 | 2021-07-01 | Johnson Controls Technology Company | Systems and methods for generating a data structure from multiple bim files |
US12021650B2 (en) | 2019-12-31 | 2024-06-25 | Tyco Fire & Security Gmbh | Building data platform with event subscriptions |
US11769066B2 (en) | 2021-11-17 | 2023-09-26 | Johnson Controls Tyco IP Holdings LLP | Building data platform with digital twin triggers and actions |
US11894944B2 (en) | 2019-12-31 | 2024-02-06 | Johnson Controls Tyco IP Holdings LLP | Building data platform with an enrichment loop |
US12100280B2 (en) | 2020-02-04 | 2024-09-24 | Tyco Fire & Security Gmbh | Systems and methods for software defined fire detection and risk assessment |
US11537386B2 (en) | 2020-04-06 | 2022-12-27 | Johnson Controls Tyco IP Holdings LLP | Building system with dynamic configuration of network resources for 5G networks |
US11874809B2 (en) | 2020-06-08 | 2024-01-16 | Johnson Controls Tyco IP Holdings LLP | Building system with naming schema encoding entity type and entity relationships |
US11397773B2 (en) | 2020-09-30 | 2022-07-26 | Johnson Controls Tyco IP Holdings LLP | Building management system with semantic model integration |
US11954154B2 (en) | 2020-09-30 | 2024-04-09 | Johnson Controls Tyco IP Holdings LLP | Building management system with semantic model integration |
US20220138362A1 (en) | 2020-10-30 | 2022-05-05 | Johnson Controls Technology Company | Building management system with configuration by building model augmentation |
US12061453B2 (en) | 2020-12-18 | 2024-08-13 | Tyco Fire & Security Gmbh | Building management system performance index |
JP2024511974A (en) | 2021-03-17 | 2024-03-18 | ジョンソン・コントロールズ・タイコ・アイピー・ホールディングス・エルエルピー | System and method for determining equipment energy waste |
US11934521B2 (en) * | 2021-04-21 | 2024-03-19 | Sonalysts, Inc. | System and method of situation awareness in industrial control systems |
US11899723B2 (en) | 2021-06-22 | 2024-02-13 | Johnson Controls Tyco IP Holdings LLP | Building data platform with context based twin function processing |
US11796974B2 (en) | 2021-11-16 | 2023-10-24 | Johnson Controls Tyco IP Holdings LLP | Building data platform with schema extensibility for properties and tags of a digital twin |
US11934966B2 (en) | 2021-11-17 | 2024-03-19 | Johnson Controls Tyco IP Holdings LLP | Building data platform with digital twin inferences |
US11704311B2 (en) | 2021-11-24 | 2023-07-18 | Johnson Controls Tyco IP Holdings LLP | Building data platform with a distributed digital twin |
US12013673B2 (en) | 2021-11-29 | 2024-06-18 | Tyco Fire & Security Gmbh | Building control system using reinforcement learning |
US11714930B2 (en) | 2021-11-29 | 2023-08-01 | Johnson Controls Tyco IP Holdings LLP | Building data platform with digital twin based inferences and predictions for a graphical building model |
US12013823B2 (en) | 2022-09-08 | 2024-06-18 | Tyco Fire & Security Gmbh | Gateway system that maps points into a graph schema |
US12061633B2 (en) | 2022-09-08 | 2024-08-13 | Tyco Fire & Security Gmbh | Building system that maps points into a graph schema |
CN116318783B (en) * | 2022-12-05 | 2023-08-22 | 浙江大学 | Network industrial control equipment safety monitoring method and device based on safety index |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040263625A1 (en) * | 2003-04-22 | 2004-12-30 | Matsushita Electric Industrial Co., Ltd. | Camera-linked surveillance system |
US20100063648A1 (en) * | 2008-09-11 | 2010-03-11 | Noel Wayne Anderson | Distributed knowledge base program for vehicular localization and work-site management |
US20110307221A1 (en) * | 2010-06-10 | 2011-12-15 | Hach Company | Blind logger dynamic caller |
US9892463B1 (en) * | 2014-04-25 | 2018-02-13 | State Farm Mutual Automobile Insurance Company | System and methods for community-based cause of loss determination |
US10042341B1 (en) * | 2015-02-19 | 2018-08-07 | State Farm Mutual Automobile Insurance Company | Systems and methods for monitoring building health |
Family Cites Families (124)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7242988B1 (en) | 1991-12-23 | 2007-07-10 | Linda Irene Hoffberg | Adaptive pattern recognition based controller apparatus and method and human-factored interface therefore |
US5453733A (en) | 1992-07-20 | 1995-09-26 | Digital Security Controls Ltd. | Intrusion alarm with independent trouble evaluation |
US5708423A (en) | 1995-05-09 | 1998-01-13 | Sensormatic Electronics Corporation | Zone-Based asset tracking and control system |
JP3748595B2 (en) | 1995-06-30 | 2006-02-22 | 本田技研工業株式会社 | Anti-lock brake control device for vehicle |
US5587704A (en) | 1995-09-01 | 1996-12-24 | Foster; Samuel T. | Code blue light audio and visual alarm apparatus |
US5825283A (en) | 1996-07-03 | 1998-10-20 | Camhi; Elie | System for the security and auditing of persons and property |
US5862201A (en) | 1996-09-12 | 1999-01-19 | Simplex Time Recorder Company | Redundant alarm monitoring system |
US20060287783A1 (en) | 1998-01-15 | 2006-12-21 | Kline & Walker Llc | Automated accounting system that values, controls, records and bills the uses of equipment/vehicles for society |
US7028005B2 (en) | 1999-12-30 | 2006-04-11 | Ge Capital Commercial Finance, Inc. | Methods and systems for finding value and reducing risk |
US6853920B2 (en) | 2000-03-10 | 2005-02-08 | Smiths Detection-Pasadena, Inc. | Control for an industrial process using one or more multidimensional variables |
US20010053963A1 (en) | 2000-06-16 | 2001-12-20 | Lg Electronics Inc. | Refrigerator and method for controlling the same |
US6720874B2 (en) | 2000-09-29 | 2004-04-13 | Ids Systems, Inc. | Portal intrusion detection apparatus and method |
AU2002243431A1 (en) | 2000-10-23 | 2002-06-24 | Deloitte And Touche Llp | Commercial insurance scoring system and method |
US7233886B2 (en) | 2001-01-19 | 2007-06-19 | Smartsignal Corporation | Adaptive modeling of changed states in predictive condition monitoring |
US7253732B2 (en) | 2001-09-10 | 2007-08-07 | Osann Jr Robert | Home intrusion confrontation avoidance system |
JP3996428B2 (en) | 2001-12-25 | 2007-10-24 | 松下電器産業株式会社 | Abnormality detection device and abnormality detection system |
US7685029B2 (en) | 2002-01-25 | 2010-03-23 | Invensys Systems Inc. | System and method for real-time activity-based accounting |
US20040150519A1 (en) | 2003-01-31 | 2004-08-05 | Iftikhar Husain | System and method for monitoring having an embedded device |
CA2519693A1 (en) | 2003-03-27 | 2004-10-14 | University Of Washington | Performing predictive pricing based on historical data |
US7711584B2 (en) | 2003-09-04 | 2010-05-04 | Hartford Fire Insurance Company | System for reducing the risk associated with an insured building structure through the incorporation of selected technologies |
US9311676B2 (en) | 2003-09-04 | 2016-04-12 | Hartford Fire Insurance Company | Systems and methods for analyzing sensor data |
US6911907B2 (en) | 2003-09-26 | 2005-06-28 | General Electric Company | System and method of providing security for a site |
US7109861B2 (en) | 2003-11-26 | 2006-09-19 | International Business Machines Corporation | System and method for alarm generation based on the detection of the presence of a person |
US7630933B2 (en) | 2004-02-20 | 2009-12-08 | Horizon Digital Finance, Llc | System and method for matching loan consumers and lenders |
US9729342B2 (en) | 2010-12-20 | 2017-08-08 | Icontrol Networks, Inc. | Defining and implementing sensor triggered response rules |
US7697026B2 (en) | 2004-03-16 | 2010-04-13 | 3Vr Security, Inc. | Pipeline architecture for analyzing multiple video streams |
US9609003B1 (en) | 2007-06-12 | 2017-03-28 | Icontrol Networks, Inc. | Generating risk profile using data of home monitoring and security system |
US7543144B2 (en) | 2004-07-21 | 2009-06-02 | Beachhead Solutions | System and method for lost data destruction of electronic data stored on portable electronic devices |
US20060033625A1 (en) | 2004-08-11 | 2006-02-16 | General Electric Company | Digital assurance method and system to extend in-home living |
US7944469B2 (en) | 2005-02-14 | 2011-05-17 | Vigilos, Llc | System and method for using self-learning rules to enable adaptive security monitoring |
US9450776B2 (en) | 2005-03-16 | 2016-09-20 | Icontrol Networks, Inc. | Forming a security network including integrated security system components |
US7277823B2 (en) | 2005-09-26 | 2007-10-02 | Lockheed Martin Corporation | Method and system of monitoring and prognostics |
US7738975B2 (en) | 2005-10-04 | 2010-06-15 | Fisher-Rosemount Systems, Inc. | Analytical server integrated in a process control network |
US7420472B2 (en) | 2005-10-16 | 2008-09-02 | Bao Tran | Patient monitoring apparatus |
US20150187192A1 (en) | 2005-12-08 | 2015-07-02 | Costa Verdi, Series 63 Of Allied Security Trust I | System and method for interactive security |
US20080294690A1 (en) | 2007-05-22 | 2008-11-27 | Mcclellan Scott | System and Method for Automatically Registering a Vehicle Monitoring Device |
US8666936B2 (en) | 2006-10-05 | 2014-03-04 | Trimble Navigation Limited | System and method for asset management |
EP1921572A1 (en) | 2006-11-09 | 2008-05-14 | Max J. Pucher | Method for training a system to specifically react on a specific input |
US7933666B2 (en) | 2006-11-10 | 2011-04-26 | Rockwell Automation Technologies, Inc. | Adjustable data collection rate for embedded historians |
WO2008082441A1 (en) | 2006-12-29 | 2008-07-10 | Prodea Systems, Inc. | Display inserts, overlays, and graphical user interfaces for multimedia systems |
NZ554060A (en) | 2007-03-21 | 2008-11-28 | Waikatolink Ltd | Sensor assembly |
US7696866B2 (en) | 2007-06-28 | 2010-04-13 | Microsoft Corporation | Learning and reasoning about the context-sensitive reliability of sensors |
WO2009012289A1 (en) | 2007-07-16 | 2009-01-22 | Cernium Corporation | Apparatus and methods for video alarm verification |
US10120105B2 (en) | 2007-10-23 | 2018-11-06 | La Crosse Technology Ltd. | Location monitoring via a gateway |
BRPI0910573B1 (en) | 2008-04-17 | 2019-09-03 | The Travelers Indemnity Company | system for processing a property insurance claim |
US8081795B2 (en) | 2008-05-09 | 2011-12-20 | Hartford Fire Insurance Company | System and method for assessing a condition of property |
JP4603596B2 (en) | 2008-05-16 | 2010-12-22 | 本田技研工業株式会社 | Body flow restraint device |
US9235214B2 (en) | 2008-09-11 | 2016-01-12 | Deere & Company | Distributed knowledge base method for vehicular localization and work-site management |
US8463699B2 (en) | 2008-10-14 | 2013-06-11 | American International Group | Method and system of determining and applying insurance profit scores |
US20100134285A1 (en) | 2008-12-02 | 2010-06-03 | Honeywell International Inc. | Method of sensor data fusion for physical security systems |
CA2662431A1 (en) | 2009-02-24 | 2010-08-24 | The Business Accelerators Inc. | Biometric characterizing system and method and apparel linking system and method |
FR2951839B1 (en) | 2009-10-23 | 2021-06-11 | Commissariat Energie Atomique | METHOD FOR EVALUATING THE RESEMBLANCE OF A REQUIRED OBJECT TO REFERENCE OBJECTS |
US8650048B1 (en) | 2010-04-28 | 2014-02-11 | United Services Automobile Association (Usaa) | Method and system for insuring real property in wildfire prone areas |
US8660979B2 (en) | 2011-03-03 | 2014-02-25 | Hewlett-Packard Development Company, L.P. | Event prediction |
US20130027561A1 (en) | 2011-07-29 | 2013-01-31 | Panasonic Corporation | System and method for improving site operations by detecting abnormalities |
US9142108B2 (en) | 2011-09-01 | 2015-09-22 | Ecolink Intelligent Technology, Inc. | Security apparatus and method |
US20130091213A1 (en) | 2011-10-08 | 2013-04-11 | Broadcom Corporation | Management of social device interaction with social network infrastructure |
US20130201316A1 (en) | 2012-01-09 | 2013-08-08 | May Patents Ltd. | System and method for server based control |
KR20140121845A (en) | 2012-01-13 | 2014-10-16 | 펄스 펑션 에프6 리미티드 | Telematics system with 3d inertial sensors |
US20130218603A1 (en) | 2012-02-21 | 2013-08-22 | Elwha Llc | Systems and methods for insurance based upon characteristics of a collision detection system |
US8710983B2 (en) | 2012-05-07 | 2014-04-29 | Integrated Security Corporation | Intelligent sensor network |
US8928476B2 (en) | 2012-05-17 | 2015-01-06 | Honeywell International Inc. | System for advanced security management |
US20140006060A1 (en) | 2012-06-27 | 2014-01-02 | Hartford Fire Insurance Company | System and method for processing data related to worksite analyses |
US10881339B2 (en) | 2012-06-29 | 2021-01-05 | Dexcom, Inc. | Use of sensor redundancy to detect sensor failures |
US9412067B2 (en) | 2012-09-05 | 2016-08-09 | Numenta, Inc. | Anomaly detection in spatial and temporal memory system |
US20140136242A1 (en) | 2012-11-12 | 2014-05-15 | State Farm Mutual Automobile Insurance Company | Home sensor data gathering for insurance rating purposes |
US8760285B2 (en) | 2012-11-15 | 2014-06-24 | Wildfire Defense Systems, Inc. | Wildfire risk assessment |
WO2014151956A1 (en) | 2013-03-14 | 2014-09-25 | Flir Systems, Inc. | Wind sensor motion compensation systems and methods |
JP6008124B2 (en) | 2013-02-18 | 2016-10-19 | 株式会社デンソー | Vehicle orientation detection method and vehicle orientation detection device |
JP6098211B2 (en) | 2013-02-18 | 2017-03-22 | 株式会社デンソー | Vehicle trajectory calculation method |
US20140246502A1 (en) | 2013-03-04 | 2014-09-04 | Hello Inc. | Wearable devices with magnets encased by a material that redistributes their magnetic fields |
US9262906B2 (en) | 2013-03-14 | 2016-02-16 | Comcast Cable Communications, Llc | Processing sensor data |
US20140266592A1 (en) | 2013-03-15 | 2014-09-18 | Digi International Inc. | Network gateway system and method |
US20140279707A1 (en) | 2013-03-15 | 2014-09-18 | CAA South Central Ontario | System and method for vehicle data analysis |
US20140278573A1 (en) * | 2013-03-15 | 2014-09-18 | State Farm Mutual Automobile Insurance Company | Systems and methods for initiating insurance processing using ingested data |
US10268660B1 (en) | 2013-03-15 | 2019-04-23 | Matan Arazi | Real-time event transcription system and method |
US9764468B2 (en) | 2013-03-15 | 2017-09-19 | Brain Corporation | Adaptive predictor apparatus and methods |
US10740358B2 (en) | 2013-04-11 | 2020-08-11 | Oracle International Corporation | Knowledge-intensive data processing system |
AU2014257036A1 (en) | 2013-04-23 | 2015-11-12 | Canary Connect, Inc. | Security and/or monitoring devices and systems |
US20140358592A1 (en) | 2013-05-31 | 2014-12-04 | OneEvent Technologies, LLC | Sensors for usage-based property insurance |
US10002339B2 (en) | 2013-07-11 | 2018-06-19 | Fluor Technologies Corporation | Post-disaster assessment systems and methods |
US20150025917A1 (en) | 2013-07-15 | 2015-01-22 | Advanced Insurance Products & Services, Inc. | System and method for determining an underwriting risk, risk score, or price of insurance using cognitive information |
US9053516B2 (en) | 2013-07-15 | 2015-06-09 | Jeffrey Stempora | Risk assessment using portable devices |
US9257030B2 (en) | 2013-07-16 | 2016-02-09 | Leeo, Inc. | Electronic device with environmental monitoring |
EP3030879A4 (en) | 2013-08-09 | 2018-01-03 | CNRY Inc. | System and methods for monitoring an environment |
US9710858B1 (en) * | 2013-08-16 | 2017-07-18 | United Services Automobile Association (Usaa) | Insurance policy alterations using informatic sensor data |
EP2843636B1 (en) | 2013-08-23 | 2018-06-13 | E.I. Technology | Monitoring and control of alarm systems |
US9319421B2 (en) | 2013-10-14 | 2016-04-19 | Ut-Battelle, Llc | Real-time detection and classification of anomalous events in streaming data |
WO2015061712A1 (en) | 2013-10-24 | 2015-04-30 | Tourmaline Labs, Inc. | Systems and methods for collecting and transmitting telematics data from a mobile device |
AU2014357307A1 (en) | 2013-11-26 | 2016-07-14 | 9069569 Canada Inc. | System and method for providing subscribers a secure electronic emergency response portal on a network |
US9753796B2 (en) | 2013-12-06 | 2017-09-05 | Lookout, Inc. | Distributed monitoring, evaluation, and response for multiple devices |
US9495860B2 (en) | 2013-12-11 | 2016-11-15 | Echostar Technologies L.L.C. | False alarm identification |
US9870697B2 (en) | 2013-12-17 | 2018-01-16 | At&T Mobility Ii Llc | Method, computer-readable storage device and apparatus for providing a collaborative standalone area monitor |
DE202014011530U1 (en) | 2013-12-27 | 2021-12-17 | Abbott Diabetes Care, Inc. | Systems and devices for authentication in an analyte monitoring environment |
WO2015123604A1 (en) | 2014-02-17 | 2015-08-20 | Tourmaline Labs, Inc. | Systems and methods for estimating movements of a vehicle using a mobile device |
US9384656B2 (en) | 2014-03-10 | 2016-07-05 | Tyco Fire & Security Gmbh | False alarm avoidance in security systems filtering low in network |
US9753767B2 (en) | 2014-03-11 | 2017-09-05 | Sas Institute Inc. | Distributed data set task selection |
US10169720B2 (en) | 2014-04-17 | 2019-01-01 | Sas Institute Inc. | Systems and methods for machine learning using classifying, clustering, and grouping time series data |
US9754325B1 (en) | 2014-05-20 | 2017-09-05 | State Farm Mutual Automobile Insurance Company | Autonomous vehicle operation feature monitoring and evaluation of effectiveness |
US20160029966A1 (en) | 2014-07-31 | 2016-02-04 | Sano Intelligence, Inc. | Method and system for processing and analyzing analyte sensor signals |
US10084638B2 (en) | 2014-08-13 | 2018-09-25 | Tyco Safety Products Canada Ltd. | Method and apparatus for automation and alarm architecture |
US20160048580A1 (en) | 2014-08-14 | 2016-02-18 | Verizon Patent And Licensing Inc. | Method and system for providing delegated classification and learning services |
US10043211B2 (en) | 2014-09-08 | 2018-08-07 | Leeo, Inc. | Identifying fault conditions in combinations of components |
US10249158B1 (en) | 2014-10-07 | 2019-04-02 | State Farm Mutual Automobile Insurance Company | Systems and methods for automatically responding to a fire |
US20160110833A1 (en) | 2014-10-16 | 2016-04-21 | At&T Mobility Ii Llc | Occupancy Indicator |
US9613523B2 (en) | 2014-12-09 | 2017-04-04 | Unilectric, Llc | Integrated hazard risk management and mitigation system |
US9786012B2 (en) | 2014-12-16 | 2017-10-10 | Hartford Fire Insurance Company | Calibrated underwriting system |
EP4343728A3 (en) | 2014-12-30 | 2024-06-19 | Alarm.com Incorporated | Digital fingerprint tracking |
US10530608B2 (en) | 2015-03-10 | 2020-01-07 | Elemental Machines, Inc. | Method and apparatus for environmental sensing |
US10411637B2 (en) | 2015-06-04 | 2019-09-10 | Silverback Advanced Motor Monitoring, LLC | Electrical pattern monitor |
CA3128629A1 (en) | 2015-06-05 | 2016-07-28 | C3.Ai, Inc. | Systems and methods for data processing and enterprise ai applications |
US20170004226A1 (en) | 2015-07-05 | 2017-01-05 | Sas Institute Inc. | Stress testing by avoiding simulations |
US20170011465A1 (en) | 2015-07-08 | 2017-01-12 | Here Global B.V. | Method and apparatus for providing fee rate based on safety score |
US10522031B2 (en) | 2015-09-01 | 2019-12-31 | Honeywell International Inc. | System and method providing early prediction and forecasting of false alarms by applying statistical inference models |
US10425702B2 (en) | 2015-09-30 | 2019-09-24 | Sensormatic Electronics, LLC | Sensor packs that are configured based on business application |
US10354332B2 (en) | 2015-09-30 | 2019-07-16 | Sensormatic Electronics, LLC | Sensor based system and method for drift analysis to predict equipment failure |
US11436911B2 (en) | 2015-09-30 | 2022-09-06 | Johnson Controls Tyco IP Holdings LLP | Sensor based system and method for premises safety and operational profiling based on drift analysis |
US20170091867A1 (en) | 2015-09-30 | 2017-03-30 | Sensormatic Electronics, LLC | Sensor Based System And Method For Determining Allocation Based On Physical Proximity |
US10902524B2 (en) | 2015-09-30 | 2021-01-26 | Sensormatic Electronics, LLC | Sensor based system and method for augmenting underwriting of insurance policies |
US11151654B2 (en) | 2015-09-30 | 2021-10-19 | Johnson Controls Tyco IP Holdings LLP | System and method for determining risk profile, adjusting insurance premiums and automatically collecting premiums based on sensor data |
US10296979B2 (en) | 2015-09-30 | 2019-05-21 | Sensormatic Electronics, LLC | Sensor based system and method for augmenting insurance claim filing |
US20170308802A1 (en) * | 2016-04-21 | 2017-10-26 | Arundo Analytics, Inc. | Systems and methods for failure prediction in industrial environments |
US10380521B2 (en) | 2016-06-06 | 2019-08-13 | Tyco Integrated Security Llc | Predicting service for intrusion and alarm systems based on signal activity patterns |
US9996078B1 (en) | 2017-04-07 | 2018-06-12 | Pratt & Whitney Canada Corp. | Pre-emptive fault detection through advanced signal analysis |
- 2016-06-06 US US15/173,795 patent/US10810676B2/en active Active
- 2017-05-31 WO PCT/US2017/035091 patent/WO2017213918A1/en active Application Filing
- 2020-10-15 US US17/071,722 patent/US20210097624A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
US10810676B2 (en) | 2020-10-20 |
US20170352102A1 (en) | 2017-12-07 |
WO2017213918A1 (en) | 2017-12-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210097624A1 (en) | Method and apparatus for increasing the density of data surrounding an event | |
US11250516B2 (en) | Method and apparatus for evaluating risk based on sensor monitoring | |
US20220076347A1 (en) | Systems and methods for detecting a drift state | |
US20210217097A1 (en) | Sensor based system and method for augmenting underwriting of insurance policies | |
US10359771B2 (en) | Prediction of false alarms in sensor-based security systems | |
US10354332B2 (en) | Sensor based system and method for drift analysis to predict equipment failure | |
US11068994B2 (en) | Sensor based system and method for augmenting insurance claim filing | |
US11436911B2 (en) | Sensor based system and method for premises safety and operational profiling based on drift analysis | |
US10425702B2 (en) | Sensor packs that are configured based on business application | |
US11037420B2 (en) | Method and apparatus for tiered analytics in a multi-sensor environment | |
US20170091867A1 (en) | Sensor Based System And Method For Determining Allocation Based On Physical Proximity | |
US12019437B2 (en) | Web services platform with cloud-based feedback control | |
US10524027B2 (en) | Sensor based system and method for premises safety and operational profiling based on drift analysis | |
JP2017511544A (en) | Person authentication and tracking system | |
CN107077472A (en) | Distributed processing system(DPS) | |
KR20100000151A (en) | Context-aware engine and price policy method of software |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SENSORMATIC ELECTRONICS, LLC, FLORIDA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VAVRASEK, DAVID;REEL/FRAME:054108/0871 Effective date: 20160602 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
AS | Assignment |
Owner name: JOHNSON CONTROLS TYCO IP HOLDINGS LLP, WISCONSIN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:JOHNSON CONTROLS INC;REEL/FRAME:058600/0126 Effective date: 20210617 Owner name: JOHNSON CONTROLS INC, WISCONSIN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:JOHNSON CONTROLS US HOLDINGS LLC;REEL/FRAME:058600/0080 Effective date: 20210617 Owner name: JOHNSON CONTROLS US HOLDINGS LLC, WISCONSIN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SENSORMATIC ELECTRONICS LLC;REEL/FRAME:058600/0001 Effective date: 20210617 |
|
AS | Assignment |
Owner name: JOHNSON CONTROLS US HOLDINGS LLC, WISCONSIN Free format text: NUNC PRO TUNC ASSIGNMENT;ASSIGNOR:SENSORMATIC ELECTRONICS, LLC;REEL/FRAME:058957/0138 Effective date: 20210806 Owner name: JOHNSON CONTROLS TYCO IP HOLDINGS LLP, WISCONSIN Free format text: NUNC PRO TUNC ASSIGNMENT;ASSIGNOR:JOHNSON CONTROLS, INC.;REEL/FRAME:058955/0472 Effective date: 20210806 Owner name: JOHNSON CONTROLS, INC., WISCONSIN Free format text: NUNC PRO TUNC ASSIGNMENT;ASSIGNOR:JOHNSON CONTROLS US HOLDINGS LLC;REEL/FRAME:058955/0394 Effective date: 20210806 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |