
US20190025435A1 - Cyber-physical system defense - Google Patents

Cyber-physical system defense

Info

Publication number
US20190025435A1
Authority
US
United States
Prior art keywords
sensor
sensors
disagreement
disagreements
cyber
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/918,787
Inventor
Barry Horowitz
Joseph Vince Pulido
Rick A. Jones
Edward C. Suhler
Ronald Dean Williams
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
UVA Licensing and Ventures Group
University of Virginia UVA
Original Assignee
University of Virginia Patent Foundation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Virginia Patent Foundation filed Critical University of Virginia Patent Foundation
Priority to US15/918,787 priority Critical patent/US20190025435A1/en
Assigned to UNIVERSITY OF VIRGINIA reassignment UNIVERSITY OF VIRGINIA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JONES, RICK A., SUHLER, EDWARD C., PULIDO, JOSEPH VINCE, WILLIAMS, RONALD DEAN, HOROWITZ, BARRY
Assigned to UNIVERSITY OF VIRGINIA PATENT FOUNDATION reassignment UNIVERSITY OF VIRGINIA PATENT FOUNDATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: UNIVERSITY OF VIRGINIA
Publication of US20190025435A1 publication Critical patent/US20190025435A1/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/01 Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/13 Receivers
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 Determining position
    • G01S 19/48 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S 19/49 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system whereby the further system is an inertial position system, e.g. loosely-coupled
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L 63/1441 Countermeasures against malicious traffic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/12 Detection or prevention of fraud
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/12 Detection or prevention of fraud
    • H04W 12/121 Wireless intrusion detection systems [WIDS]; Wireless intrusion prevention systems [WIPS]
    • H04W 12/122 Counter-measures against attacks; Protection against rogue devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/12 Detection or prevention of fraud
    • H04W 12/128 Anti-malware arrangements, e.g. protection against SMS fraud or mobile malware
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/38 Services specially adapted for particular environments, situations or purposes for collecting sensor information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W 4/48 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for in-vehicle communication
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 40/00 Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them
    • Y04S 40/18 Network protocols supporting networked applications, e.g. including control of end-device applications over a network

Definitions

  • The cluster circuit set 160 can perform cluster analysis on the sampled sensor disagreements. As noted above, diversity in sensor number (e.g., the more sensors used), manufacturer (e.g., multiple manufacturers are harder to infiltrate in order to compromise a device), and type (e.g., an inertial navigation system and a barometer to measure altitude have different attack vectors) raises the cost of an attack on the sensors.
  • Beyond this, an analysis of the disagreement signals produced by ascertaining sensor disagreements over time can reduce false positive alerts and provide greater confidence in identifying a malfunctioning sensor.
  • Normative testing can provide a profile of false disagreements between sensors. For example, suppose the disagreement signal follows an exponential distribution with a fixed arrival rate (the false disagreement rate), so that disagreement indications between the two sensors arrive at that fixed rate. If X_i is the time of the ith false disagreement, then X_{i+N} − X_i is the time between the ith false disagreement and the (i+N)th false disagreement. When a sensor deviation occurs, the disagreement indications in the disagreement signal cluster; thus, the inter-arrival times of the disagreement indications (i.e., X_{i+N} − X_i) will tend to be short, or to get shorter over time.
  • In an example, performing the cluster analysis can include determining whether a predetermined number of disagreements occurred within a calculated period of time. This is effective because the shorter inter-arrival times of the disagreement indications increase the number of indications within a fixed time window.
  • In an example, the calculated time period can be calculated via an inverse Gamma function with a probability parameter, a sample-size parameter, and a time-of-arrival parameter, the time-of-arrival parameter determined via measurement of a sensor system during normative testing. For example, for a given probability P, if X_{i+N} − X_i ≤ T, a cluster can be identified, where T is the value of the inverse Gamma function for these parameters. In one such example, the calculated time period (e.g., time window) in which 10 disagreements is concerning is 542 time units, as sketched below.
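A minimal sketch of this window calculation in Python, using SciPy's gamma.ppf as the inverse Gamma function; the arrival rate, probability, and sample-size values below are hypothetical stand-ins for normatively measured quantities, not values from this disclosure:

```python
from scipy.stats import gamma

# Hypothetical normative parameters (illustrative only):
lam = 0.05  # false-disagreement arrival rate, in disagreements per time unit
N = 10      # sample-size parameter: number of disagreements defining a cluster
P = 0.01    # probability of N arrivals spanning <= T during normal operation

# For a Poisson false-disagreement process, the span of N arrivals is
# Gamma-distributed with shape N and scale 1/lam, so the window T is the
# inverse Gamma CDF (percent-point function) evaluated at P.
T = gamma.ppf(P, a=N, scale=1.0 / lam)
print(f"flag a cluster when {N} disagreements arrive within {T:.0f} time units")
```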
  • the alert circuit set 165 can provide a deviation indication in response to the cluster analysis resulting in disagreement density beyond a threshold.
  • the cluster analysis provides an indication when the threshold is met, such as a predefined number of disagreement indications within the calculated time period.
  • the deviation indication can be a variety of indications, such as an alarm, a disabling signal for the deviating sensor, initiation of an investigation into the source of the deviation (e.g., determining whether a false signal is manipulating the sensor), a log entry, etc.
  • the deviation indication can indicate which sensor of the plurality of sensors is the source of the deviation.
  • In an example, a relationship structure (such as a decision tree) can be applied to pairs (or other combinations) of sensors in the plurality of sensors using disagreements in the dataset that provided the deviation indication. That is, time-correlated disagreements that resulted in the deviation indication can be compared to determine which sensor disagrees with the others while eliminating from consideration those sensors that agree with the others.
  • An example of such a decision tree is described below with respect to FIG. 3 .
  • Disparate sensors can be used to control a cyber-physical system and misbehaving sensors can be reliably detected and dealt with.
  • cyber-physical system builders and users can deploy critical systems with greater confidence in the ability of these systems to withstand attacks from malicious entities.
  • FIGS. 2A and 2B illustrate an example of a disagreement signal cluster analysis timeline 200, according to an embodiment.
  • During a normative period (e.g., when the system is operating normally and is not under attack), disagreements between two sensors can still occur for a variety of reasons (e.g., differing calibration, error tolerances, measurement mechanism, etc.).
  • FIG. 2A illustrates a normative period of the timeline 200 , with disagreement indications S- 1 through S- 6 .
  • varying intervals can also be addressed because a cluster of such indications is not expected to occur during normative operation.
  • FIG. 2B illustrates a later period in the disagreement signal cluster analysis timeline 200, starting at S-N and moving through S-(N+M).
  • region 205 indicates the calculated time period discussed above.
  • the increased density of the disagreements in the timeline 200 indicates that a deviation is occurring. If the number of disagreements within the time period 205 is greater than the number used to calculate the time period 205 (e.g., 10 as described above), then an alert or other deviation indication can be produced.
  • FIG. 3 illustrates an example of a deviating sensor decision tree 300, according to an embodiment.
  • a decision tree or other mechanism can be used to determine which sensor of a plurality of sensors is causing problems.
  • the decision tree 300 applies to a system with two inertial navigation system sensors and two satellite navigation system sensors.
  • the disagreements discussed below are determined after the cluster analysis results in a disagreement indication.
  • At decision 305, it is tested whether or not the two inertial navigation system sensors agree with each other (i.e., that they did not disagree). If the two inertial navigation system sensors do agree with each other, then decision 310 determines whether the two satellite navigation system sensors agree with each other. If decision 310 is also affirmative, it can be concluded that the intra-type checks indicate that no problem exists. However, to address a sensor type attack (e.g., an attack effective across the entire sensor type, such as a global positioning system spoof attack), the first satellite navigation system sensor is compared to the first inertial navigation system sensor at decision 315. If decision 315 indicates an agreement, then no alarm 320 is initiated. However, if decision 315 indicates a disagreement, the alarm 325 can be initiated and indicate a spoofing attack on the satellite navigation system sensors.
  • a disagreement between the satellite navigation system sensors where the inertial navigation sensors agree indicates that one of the satellite navigation sensors is deviating.
  • In this case, the first satellite navigation sensor can be tested for agreement with a trusted sensor (e.g., the first inertial navigation sensor) at decision 330. If the first satellite navigation sensor does not agree with the first inertial navigation system sensor, then the alarm 340 can indicate that the first satellite navigation system sensor is the offending sensor. Otherwise, the alarm 335 can indicate that the second satellite navigation system sensor is the offending sensor.
  • When the two inertial navigation system sensors disagree at decision 305, the satellite navigation sensors can be tested for agreement. It is noted that if these sensors do not agree, the decision tree cannot make a determination, because either of the inertial navigation sensors could be an offender and either of the satellite navigation system sensors could be an offender, and thus there is no trusted sensor that can be used to determine which sensor of each type is an offender. However, if the two satellite navigation sensors do agree with each other, each can be considered a trusted sensor (assuming that an attacker could not simultaneously perform a spoof attack and compromise an inertial navigation system sensor) to test the first inertial navigation sensor at decision 350. If the first inertial navigation sensor disagrees with a satellite navigation system sensor, then the alarm 355 can indicate that the first inertial navigation sensor is the offending sensor. Otherwise, alarm 360 can indicate that the second inertial navigation sensor is the offending sensor.
  • Essentially, the decision tree 300 performs a trust analysis on some sensors that can then be used to test other sensors. Such relationships can be exploited in other sensor arrangements. Moreover, some sensors may be more impervious to attack, and can thus be tested first to establish a trusted sensor set early in the process, such as the initial testing of the inertial navigation system sensors at decision 305. In any case, the number of sensors tested can be increased, and the specific order of testing can be varied, as long as the final result is determinative of an offending set of sensors; a minimal sketch of this logic follows.
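A minimal sketch of the decision tree 300 logic in Python; the function and argument names are illustrative assumptions, not identifiers from this disclosure:

```python
def diagnose(ins_agree: bool, sns_agree: bool, cross_agree: bool) -> str:
    """Sketch of decision tree 300 for two inertial navigation system (INS)
    sensors and two satellite navigation system (SNS) sensors.

    ins_agree:   the two INS sensors agree (decision 305)
    sns_agree:   the two SNS sensors agree (decision 310)
    cross_agree: SNS-1 agrees with INS-1 (decisions 315, 330, and 350)
    """
    if ins_agree:
        if sns_agree:
            # Intra-type checks pass; cross-check for a type-wide spoof (315).
            return "no alarm (320)" if cross_agree else "SNS spoofing attack (325)"
        # One SNS sensor deviates; INS-1 serves as the trusted reference (330).
        return "SNS-2 offending (335)" if cross_agree else "SNS-1 offending (340)"
    if sns_agree:
        # The agreeing SNS pair is trusted; test INS-1 against it (350).
        return "INS-2 offending (360)" if cross_agree else "INS-1 offending (355)"
    return "indeterminate: no trusted sensor pair"

print(diagnose(ins_agree=True, sns_agree=True, cross_agree=False))
```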
  • FIG. 4 illustrates an example of a method 400 for cyber-physical defense, according to an embodiment.
  • the operations of the method 400 are implemented in computing hardware or carried out via computing hardware instructed by software.
  • Example components are described above with respect to FIG. 1 and below with respect to FIG. 5 .
  • sensor data for a plurality of sensors can be monitored.
  • the plurality of sensors can include a first set of sensors (e.g., consisting of a first single type such as satellite navigation systems) with a cardinality greater than one and a second set of sensors (e.g., consisting of a second single type such as inertial navigation systems) also with a cardinality greater than one.
  • a disagreement signal can be created (e.g., generated) by calculating time correlated disagreements between sensors in the first set of sensors, between sensors in the second set of sensors, and between sensors in the first set of sensors and the second set of sensors.
  • intra-set disagreements for both sets of sensors as well as inter-set disagreements between the two sets are determined.
  • Sensor disagreements can be determined via a statistical analysis of a common output by two sensors.
  • residuals between two sensors can be computed and subjected to the statistical analysis.
  • normality of the residuals over time can be used to ascertain whether a disagreement exists.
  • the common output may be derived, such as a position output by an inertial navigation system.
  • disagreements between sensors of the first set of sensors can be calculated by measuring the normality of the residuals of satellite measurements between two sensors.
  • disagreements between sensors of the second set of sensors can be calculated by measuring the normality of the residuals of acceleration between two sensors.
  • disagreements between sensors of the first set of sensors and the second set of sensors can be calculated by measuring the normality of the residuals of position between a sensor in the first set of sensors and a sensor in the second set of sensors.
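A generic sketch of the pairing structure enumerated in the bullets above (intra-set disagreements within each sensor set plus inter-set disagreements across the sets); the function shape and names are assumptions of this sketch, and any pairwise residual-normality test can be supplied for `disagree`:

```python
import itertools

def disagreement_signal(first_set, second_set, disagree):
    """Compute time-correlated disagreements per the structure above:
    within the first set, within the second set, and across the two sets.
    `disagree(a, b)` is any pairwise test returning True on disagreement."""
    intra = [disagree(a, b)
             for sensors in (first_set, second_set)
             for a, b in itertools.combinations(sensors, 2)]
    inter = [disagree(a, b) for a in first_set for b in second_set]
    return intra + inter
```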
  • the disagreement signal can be sampled, and a determination can be made that the sampled disagreement signal has an inter-arrival time below a threshold.
  • In an example, the threshold is a time period with a magnitude determined via an inverse Gamma function with a probability parameter, a sample-size parameter, and a time-of-arrival parameter, the time-of-arrival parameter determined via measurement of a sensor system during normative testing.
  • Such a threshold identifies a period in which the disagreement density is beyond that of the system when it is not under attack.
  • an alarm can be provided in response to determining that the sampled disagreement signal has an inter-arrival time below the threshold.
  • the alarm can include identification of a sensor in the plurality of sensors deemed to be compromised. As described above with respect to FIGS. 1 and 3 , different sensor pairing structures can be used to make this identification.
  • the sensor is deemed to be compromised when it disagrees with other sensors in the plurality of sensors and the other sensors in the plurality of sensors agree with each other. That is, an identification of a sensor that disagrees with one or more other sensors where those other sensors agree with each other.
  • FIG. 5 illustrates a block diagram of an example machine 500 upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform.
  • the machine 500 may operate as a standalone device or may be connected (e.g., networked) to other machines.
  • the machine 500 may operate in the capacity of a server machine, a client machine, or both in server-client network environments.
  • the machine 500 may act as a peer machine in peer-to-peer (P2P) (or other distributed) network environment.
  • the machine 500 may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • The term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.
  • Circuit sets are a collection of circuits implemented in tangible entities that include hardware (e.g., simple circuits, gates, logic, etc.). Circuit set membership may be flexible over time and underlying hardware variability. Circuit sets include members that may, alone or in combination, perform specified operations when operating. In an example, hardware of the circuit set may be immutably designed to carry out a specific operation (e.g., hardwired).
  • the hardware of the circuit set may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation.
  • the instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuit set in hardware via the variable connections to carry out portions of the specific operation when in operation.
  • the computer readable medium is communicatively coupled to the other components of the circuit set member when the device is operating.
  • any of the physical components may be used in more than one member of more than one circuit set.
  • execution units may be used in a first circuit of a first circuit set at one point in time and reused by a second circuit in the first circuit set, or by a third circuit in a second circuit set at a different time.
  • Machine 500 may include a hardware processor 502 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 504, and a static memory 506, some or all of which may communicate with each other via an interlink (e.g., bus) 508.
  • The machine 500 may further include a display unit 510, an alphanumeric input device 512 (e.g., a keyboard), and a user interface (UI) navigation device 514 (e.g., a mouse). In an example, the display unit 510, input device 512, and UI navigation device 514 may be a touch screen display.
  • The machine 500 may additionally include a storage device (e.g., drive unit) 516, a signal generation device 518 (e.g., a speaker), a network interface device 520, and one or more sensors 521, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor.
  • The machine 500 may include an output controller 528, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).
  • The storage device 516 may include a machine readable medium 522 on which is stored one or more sets of data structures or instructions 524 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein.
  • The instructions 524 may also reside, completely or at least partially, within the main memory 504, within the static memory 506, or within the hardware processor 502 during execution thereof by the machine 500.
  • In an example, one or any combination of the hardware processor 502, the main memory 504, the static memory 506, or the storage device 516 may constitute machine readable media.
  • While the machine readable medium 522 is illustrated as a single medium, the term "machine readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 524.
  • The term "machine readable medium" may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 500 and that cause the machine 500 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions.
  • Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media.
  • a massed machine readable medium comprises a machine readable medium with a plurality of particles having invariant (e.g., rest) mass. Accordingly, massed machine-readable media are not transitory propagating signals.
  • massed machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the instructions 524 may further be transmitted or received over a communications network 526 using a transmission medium via the network interface device 520 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.).
  • Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, among others.
  • the network interface device 520 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 526.
  • the network interface device 520 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques.
  • The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine 500, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
  • the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.”
  • the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Signal Processing (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Virology (AREA)

Abstract

System and techniques for cyber-physical system defense are described herein. Sensor disagreements between a plurality of sensors over time can be sampled. Cluster analysis on the sampled sensor disagreements can be performed. A deviation indication can be provided in response to the cluster analysis resulting in disagreement density beyond a threshold.

Description

    CLAIM OF PRIORITY
  • This patent application is a continuation of, and claims the benefit of priority under 35 U.S.C. § 120 to, U.S. patent application Ser. No. 14/660,278, titled “CYBER-PHYSICAL SYSTEM DEFENSE” and filed on Mar. 17, 2015, which claims the benefit of priority, under 35 U.S.C. § 119, to U.S. Provisional Application Ser. No. 61/955,669, titled “CLOUD BASED SYSTEM AWARE CYBER SECURITY AND RELATED METHODS THEREOF” and filed on Mar. 19, 2014, and also claims priority to U.S. Provisional Application Ser. No. 62/075,179, titled “SYSTEM AWARE CYBER SECURITY AND RELATED METHODS THEREOF” and filed on Nov. 4, 2014, the entirety of each of which is hereby incorporated by reference herein.
  • TECHNICAL FIELD
  • Embodiments described herein generally relate to system security and more specifically to cyber-physical system defense.
  • BACKGROUND
  • Cyber-physical systems combine computational, communication, sensory, and control capabilities to monitor and regulate physical domain processes. Cyber-physical systems broadly focus on monitoring and controlling a physical process, and may include capabilities to: sense the physical world (e.g., the position of a valve controlling a tank filling process); make decisions (e.g., whether it is necessary to open or close the valve); and perform actions in the physical world (e.g., open or close the valve to maintain the tank fluid level). Cyber-physical systems are becoming increasingly prevalent, filling roles in the civilian (e.g., power grid, public utility services, financial infrastructure, etc.) and defense (e.g., search and rescue missions and command, control, and communications (C3) systems) spaces.
  • Cyber-physical systems are becoming increasingly accessible to attackers via increased network access through communication with control rooms, command and control stations, and other computer-based systems and networks such as the Internet. Examples of cyber-physical systems include transportation networks, unmanned aerial vehicles (UAVs), nuclear power generation, electric power distribution networks, water and gas distribution networks, and advanced communication systems. Current technology has often introduced the capability of integrating information from numerous instrumentation and control systems and transmitting the information to operations personnel in a timely manner.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.
  • FIG. 1 is a block diagram of an example of an environment including a system for cyber-physical defense, according to an embodiment.
  • FIGS. 2A and 2B illustrate an example of a disagreement signal cluster analysis timeline, according to an embodiment.
  • FIG. 3 illustrates an example of a deviating sensor decision tree, according to an embodiment.
  • FIG. 4 illustrates an example of a method for cyber-physical defense, according to an embodiment.
  • FIG. 5 is a block diagram illustrating an example of a machine upon which one or more embodiments may be implemented.
  • DETAILED DESCRIPTION
  • Due to the increased vulnerability of cyber-physical systems given greater remote access, as well as the reliance of these automated systems on control systems without human input, security is an important consideration. Traditionally, security is implemented via perimeter defense, such as firewalls, intrusion detection mechanisms, anti-viral signature software, encryption, and advanced user authentication, designed to prevent an attacker from gaining control of the cyber-physical system. While perimeter security technologies have been used to address attacks, the rate of successful attacks against critical infrastructures continues to be increasingly problematic. Furthermore, the trend in adversarial attacks is moving toward well-formed, coordinated, multi-vector attacks that compromise the system in such a way that detection and identification are challenging for perimeter security solutions. Furthermore, an asymmetrical conflict arises where defending against attacks is expensive while actually performing attacks is becoming increasingly inexpensive. That is, the attacker may take time to probe a defense perimeter, identify a weak point, and exploit it, while the defender must spend inordinate resources to identify and fix weak points ahead of time while being given very little time to address an attack on an overlooked weak point.
  • To address the problem of cyber-physical system defense given above, the behavior of multiple redundant (in both type and number) sensors can be used to produce a robust platform for cyber-physical system control. Further, disagreements between the sensors can be analyzed to identify periods in which an attack is likely, such as when noted disagreements cluster together. Such a cluster analysis addresses the practical reality that disparate sensors may not agree at all times but are likely to reflect a consistent pattern of disagreement during normative (e.g., normal and not under attack) operations. Further, to address whether or not a single disagreement has occurred between two disparate sensors, the data from these sensors can be statistically analyzed to determine whether it deviates from a known probability distribution (e.g., a random normal distribution). Moreover, individual sensor disagreements can be analyzed via a logical pairing structure (e.g., a decision tree) to identify the component suspected of being compromised in an attack. By using these techniques, a cyber-physical system includes robust and ongoing defense against attacks that may have breached other defense mechanisms, such as perimeter security mechanisms.
  • FIG. 1 is a block diagram of an example of an environment 100 including a system 145 for cyber-physical defense, according to an embodiment. The following discussion generally uses the example of a simplified UAV navigation control system. However, the discussed techniques are applicable to other cyber-physical control systems as long as a known probability distribution between sensor readings can be established.
  • The environment 100 includes a UAV 105 with a control system 110. The control system 110 can include a navigation component 130, a sensor array 115, and a system defense component 145. The navigation component 130 can include a position estimate module 135 that takes an inertial navigation system output and feedback from a Kalman filter 140 as inputs and produces a navigation signal that is combined with a satellite navigation system (e.g., global positioning system, GLONASS, Galileo, Beidou, etc.) output and used in the Kalman filter 140 to produce a navigation resolution for the UAV 105. The navigation resolution can be used to determine which actuators of the UAV to manipulate and how they are to be manipulated to pilot the UAV 105. The sensor array 115 can include a plurality of sensors, such as inertial navigation sensors INS-1 120A and INS-2 120B and satellite navigation system sensors SNS-1 125A and SNS-2 125B. In an example, the plurality of sensors in the sensor array 115 can include a number of subsets, such as subsets based on homogeneous type (e.g., inertial navigation system as a first type and satellite navigation system as a second type).
  • The system defense component 145 can include a sensor interface 150. The sensor interface 150 includes hardware to communicatively receive or retrieve data from the sensors 120-125 in the sensor array 115. The sensor interface 150 also includes hardware to provide the sensor data to other components of the system defense component, such as the sampling circuit set 155, the cluster circuit set 160, or the alert circuit set 165.
  • The sampling circuit set 155 can sample sensor disagreements between the plurality of sensors over time. In an example, sampling sensor disagreements between the plurality of sensors over time can include obtaining a first data set for a first sensor set, obtaining a second data set for a second sensor set, and determining a disagreement between the first data set and the second data set. Such result comparisons for sensor disagreements can be performed in a number of ways. For example, the result of a first sensor can be directly compared with the result of a second sensor, and when the results do not match, either exactly or within a threshold, a disagreement is registered. However, sensor diversity in sourcing (e.g., who makes the sensor) and operation (e.g., what hardware is used) can make the system more difficult to attack. This diversity may include diverse operating characteristics, making direct result comparison a less reliable indication of actual sensor disagreement. This issue may be compounded when disagreements between different types of sensors are sought.
  • In an example, to address the direct comparison issues mentioned above, determining the disagreement between the first data set and the second data set can include combining the first data set with the second data set to create a measurement data set, calculating a set of residuals for the measurement data set, and determining that a mean for the residuals is beyond a threshold in relation to an expected mean for the residuals. Thus, a statistical model is applied to the differences between the two data sets to ascertain whether they are behaving as they should. Below is a discussion of the ability to use the mean of the residuals to ascertain undue influence in one or another sensor.
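Before that discussion, a minimal sketch of the residual-mean check itself, assuming time-aligned samples and a hypothetical three-sigma criterion (the names and values here are illustrative, not from this disclosure):

```python
import numpy as np

def residual_mean_disagrees(first, second, expected_mean=0.0, n_sigma=3.0):
    """Combine two time-aligned data sets, compute their residuals, and flag
    a disagreement when the residual mean strays beyond a threshold from the
    expected mean."""
    residuals = np.asarray(first) - np.asarray(second)
    sem = residuals.std(ddof=1) / np.sqrt(len(residuals))  # standard error
    return abs(residuals.mean() - expected_mean) > n_sigma * sem

# Hypothetical position streams from two sensors sampled at the same instants;
# the second stream carries a constant bias, e.g., from a spoofed signal.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.1, 200)
b = rng.normal(0.0, 0.1, 200) + 0.25
print(residual_mean_disagrees(a, b))  # True: the residual mean is off
```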
  • First, the state of the UAV 105 navigation can be modeled for multiple satellite navigation system sensors 125 and inertial navigation system sensors 120 as a linear time-invariant stochastic system with Gaussian noise (which accounts for modeling errors, uncertainties, or system external perturbations). Such a system can be expressed in the manner illustrated below:
  • Let x be the state of the UAV 105, where:
  • x_1 := x-axis coordinate position
  • x_2 := x-axis component of velocity
  • x_3 := y-axis coordinate position
  • x_4 := y-axis component of velocity
  • Inertial Navigation System components (M units):
  • x_a^{(i)}(k+1) = A x_a(k) + B u_a(k) + B_c a_c(k) + Γ w^{(i)}(k),  i = 1, …, M
  • Satellite Navigation System components (N units):
  • z_a^{(i)}(k) = C x_a(k) + B_o a_o(k) + v^{(i)}(k),  i = 1, …, N
  • where x_a(k) ∈ ℝ^l, u(k) ∈ ℝ^p, and z_a(k) ∈ ℝ^t are the system state, the inputs of the inertial navigation system units, and the measurements of the satellite navigation units, and where w^{(j)}(k) ∈ ℝ^q and v^{(i)}(k) ∈ ℝ^t are the process and measurement noise. The w(k) and v(k) components are Gaussian white noise of the inertial navigation system and satellite navigation system measurements, respectively, with constant covariance matrices Q and R. Further, let
  • A = [[1, T_s, 0, 0], [0, 1, 0, 0], [0, 0, 1, T_s], [0, 0, 0, 1]],  B = [[T_s²/2, 0], [T_s, 0], [0, T_s²/2], [0, T_s]],  C = [[1, 0, 0, 0], [0, 0, 1, 0]],  Γ = [T_s²/2, T_s, T_s²/2, T_s]^T  (rows listed)
  • where B_o, B_c are the attack matrices and a_o(k), a_c(k) are persistent, linear deception attacks against the satellite navigation systems and inertial navigation systems at time k, respectively.
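For concreteness, a sketch that builds the matrices above for a sampling period T_s and propagates the state one step; the numeric values for the sampling period, input, and noise are hypothetical:

```python
import numpy as np

Ts = 0.1  # hypothetical sampling period

# State x = [x-position, x-velocity, y-position, y-velocity].
A = np.array([[1, Ts, 0, 0],
              [0, 1,  0, 0],
              [0, 0,  1, Ts],
              [0, 0,  0, 1]], dtype=float)
B = np.array([[Ts**2 / 2, 0],
              [Ts,        0],
              [0, Ts**2 / 2],
              [0, Ts]])
C = np.array([[1, 0, 0, 0],   # satellite units observe position only
              [0, 0, 1, 0]], dtype=float)
Gamma = np.array([Ts**2 / 2, Ts, Ts**2 / 2, Ts])  # process-noise input

rng = np.random.default_rng(1)
x = np.zeros(4)                      # state at time k
u = np.array([0.5, -0.2])            # INS-reported x/y acceleration at time k
w = rng.normal(0.0, 0.01)            # process noise sample

x_next = A @ x + B @ u + Gamma * w   # one propagation step, no attack terms
z = C @ x_next + rng.normal(0.0, 0.05, size=2)  # noisy satellite measurement
print(x_next, z)
```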
  • Given this model of the system, similarity analysis between these components can be carried out with the following component-specific similarity analyses.
  • Similarity between two inertial navigation system components can look at velocity or acceleration residuals between the two components. Selection of particular sensor outputs for comparison, rather than all components, can alleviate identified sensor issues, such as compounding drift in inertial navigation sensors. The residual of the acceleration measurement for the two inertial navigation system components is:
  • r_1(k) = (B u_a^{(1)}(k) + B_c a_c^{(1)}(k) + Γ w^{(1)}(k)) − (B u_a^{(2)}(k) + B_c a_c^{(2)}(k) + Γ w^{(2)}(k)) = B(u_a^{(1)}(k) − u_a^{(2)}(k)) + Γ(w^{(1)}(k) − w^{(2)}(k)) + α_c(k)
  • where α_c(k) = B_c(a_c^{(1)}(k) − a_c^{(2)}(k)). Given this, because w^{(i)}(k) is a zero-mean Gaussian, and because u_a^{(1)}(k) = u_a^{(2)}(k), we have:

  • Γ(w^{(1)}(k) − w^{(2)}(k)) ∼ N(0, Γ(Q^{(1)} + Q^{(2)})Γ^T)
  • If α_c(k) ≠ 0, then r_1(k) loses its zero-mean Gaussian characteristic. Thus, a valid test for sensor deviation (e.g., one based on a security intrusion) is to test the residuals for a non-zero mean, which can be performed with the compound scalar test. Since r_1(k) is a bivariate standard normally distributed random variable, let

  • 𝒳_1(k) = r_1(k)^T Σ_{r_1}^{-1} r_1(k)
  • be the sum of squares of the residual (with two degrees of freedom) between the two inertial navigation system acceleration measurements, with covariance matrix
  • Σ_{r_1} = Q^{(1)} + Q^{(2)}
  • With the application of the compound scalar test to assess the normality of the residuals, the hypothesis test becomes:

  • ℋ_0: X(𝒳_1(k)) < threshold
  • ℋ_1: X(𝒳_1(k)) > threshold
  • where ℋ_1 signifies a disagreement at time k. In an example, the threshold is 0.99.
  • As demonstrated above, the differences between the two sensor readings, at a point in time—e.g., within a tolerance threshold such that two readings are considered at the same point in time, the tolerance threshold providing sufficient resolution to produce a meaningful result given the application—can be tested for conformance to a probability distribution to determine disagreement between the sensors. Although the specifics of some terms vary, the application of the above technique operates in the same way for disagreement (e.g., similarity) testing of satellite navigation system sensors and between inertial navigation system sensors and satellite navigation system sensors because the residuals are Gaussian distributed random variables.
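A sketch of the compound scalar test in Python; interpreting the 0.99 threshold as a probability under the chi-squared distribution with two degrees of freedom is an assumption of this sketch, and the covariance and residual values are hypothetical:

```python
import numpy as np
from scipy.stats import chi2

def compound_scalar_disagrees(r, cov, threshold=0.99):
    """Signal H1 (disagreement) when the bivariate residual r, assumed
    N(0, cov) under agreement, yields a sum of squares unlikely under a
    chi-squared distribution with two degrees of freedom."""
    x = r @ np.linalg.solve(cov, r)  # r^T cov^{-1} r
    return chi2.cdf(x, df=2) > threshold

# Acceleration residual between two INS units with hypothetical covariances.
Q1 = np.diag([0.02, 0.02])
Q2 = np.diag([0.03, 0.03])
r1 = np.array([0.9, -0.8])  # residual r_1(k) at some time k
print(compound_scalar_disagrees(r1, Q1 + Q2))  # True: likely disagreement
```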
  • In an example, between two satellite navigation systems, the residuals of the satellite navigation system measurements (represented above as $z_a^{(N)}$) can be used. For example, for two satellite navigation sensors 1 and 2, the residual is
  • $$r_2(k) = z_a^{(1)}(k) - z_a^{(2)}(k) = \left(C x_a(k) + B_o a_o^{(1)}(k) + v^{(1)}(k)\right) - \left(C x_a(k) + B_o a_o^{(2)}(k) + v^{(2)}(k)\right) = \left(v^{(1)}(k) - v^{(2)}(k)\right) + \alpha_o(k)$$
  • where $\alpha_o(k) = B_o\left(a_o^{(1)}(k) - a_o^{(2)}(k)\right)$. Because $v^{(i)}(k)$ is a zero-mean Gaussian random variable, we have:

  • $$\left(v^{(1)}(k) - v^{(2)}(k)\right) \sim N\!\left(0,\ R^{(1)} + R^{(2)}\right)$$
  • Again, if $\alpha_o(k) \neq 0$, then $r_2(k)$ loses its zero-mean Gaussian characteristic. Thus, the compound scalar test can be used to assess a deviation. Let $\mathcal{X}_2(k)$ be the sum of squares of the residual:

  • $$\mathcal{X}_2(k) = r_2(k)^T \Sigma_{r_2}^{-1} r_2(k)$$
  • $$\Sigma_{r_2} = R^{(1)} + R^{(2)}$$
  • and the compound scalar test hypothesis becomes:

  • $$\mathcal{H}_0 : X\left(\mathcal{X}_2(k)\right) < \text{threshold}$$
  • $$\mathcal{H}_1 : X\left(\mathcal{X}_2(k)\right) > \text{threshold}$$
  • where $\mathcal{H}_1$ signifies a disagreement at time $k$. In an example, the threshold is 0.99.
  • In an example, between a satellite navigation system sensor and an inertial navigation system sensor, the position residuals (respectively represented above as $z_a^{(N)}$ and $x_a^{(M)}$) can be used. For example, the residual of the inertial navigation system sensor and the satellite navigation sensor is:

  • $$r_3(k) = x_a^{(1)}(k) - z_a^{(1)}(k) = C A x_a^{(1)}(k-1) + C B u(k-1) - C x_a(k) + \left[B_o a_o^{(1)}(k) - B_c a_c^{(1)}(k)\right] + \left[C \Gamma w^{(1)}(k-1) - v^{(1)}(k)\right]$$
  • which, in general, has a non-zero-mean Gaussian characteristic. Further, assume that

  • $$C A x_a^{(1)}(k-1) + C B u(k-1) = C x_a(k)$$

  • and, absent an attack,

  • $$B_o a_o^{(1)}(k) - B_c a_c^{(1)}(k) = 0$$
  • then $r_3(k)$ is a zero-mean Gaussian distributed random variable, with covariance

  • $$\Sigma_{r_3} = C \Gamma Q^{(1)} \Gamma^T C^T + R^{(1)}$$

  • and sum of squares

  • $$\mathcal{X}_3(k) = r_3(k)^T \Sigma_{r_3}^{-1} r_3(k)$$
  • then the compound scalar test hypothesis becomes:

  • $$\mathcal{H}_0 : X\left(\mathcal{X}_3(k)\right) < \text{threshold}$$
  • $$\mathcal{H}_1 : X\left(\mathcal{X}_3(k)\right) > \text{threshold}$$
  • where $\mathcal{H}_1$ signifies a disagreement at time $k$. In an example, the threshold is 0.99.
  • These principles of residual normality testing are similarly applicable to disagreement testing of other sensors, such as rotational sensors, altitude sensors, thermometers, etc. Thus, truly disparate sensors can be tested for agreement, providing a robust and secure sensing platform for cyber-physical systems.
  • Returning to the more general case, in an example, the first sensor set and the second sensor set can consist of members of a first type of sensor. Thus, sensors of the same type are compared against each other. Sensor types can be classified based on how the physical measurement is taken (e.g., current generated in a piezoelectric device under mechanical stress). In an example, the sensor type is classified based on the mechanism used to achieve the measurement. In an example, the first sensor set can consist of a first type of sensor and the second sensor set can consist of a second type of sensor. Thus, in this example, the two sensor sets represent different types of sensors that use different mechanisms to arrive at some common measurement. For example, both satellite navigation systems and inertial navigation systems can provide an absolute position of the UAV 105, but each uses a different mechanism to arrive at the position measurement.
  • The cluster circuit set 160 can perform cluster analysis on the sampled sensor disagreements. As noted above, diversity in sensor number (e.g., the more sensors used), manufacturer (e.g., multiple manufacturers are harder to infiltrate in order to compromise a device), and type (e.g., an inertial navigation system and a barometer measuring altitude have different attack vectors) tends to increase the robustness of the system. However, such variety can lead to reduced uniformity in measurement results at any given moment. To address this issue, an analysis of the disagreement signals produced by ascertaining sensor disagreements over time can reduce false positive alerts and provide greater confidence in identifying a malfunctioning sensor.
  • Normative testing can provide a profile of false disagreements between sensors. For example, suppose the disagreement signal has exponentially distributed inter-arrival times with a fixed arrival rate (the false disagreement rate). The disagreement signal will then include disagreement indications between the two sensors at a fixed average rate. If $X_i$ is the time of the $i$th false disagreement, then $X_{i+N} - X_i$ is the time between the $i$th false disagreement and the $(i+N)$th false disagreement. When a sensor deviation occurs, the disagreement indications in the disagreement signal cluster. Thus, the inter-arrival times of the disagreement indications (i.e., $X_{i+N} - X_i$) will tend to be short, or to get shorter over time.
  • In an example, performing the cluster analysis can include determining whether a predetermined number of disagreements occurred within a calculated period of time. This is effective because the shorter inter-arrival times of the disagreement indications increase the number of indications in a fixed time window. In an example, the calculated time period can be calculated via an inverse Gamma function with a probability parameter, a sample-size parameter, and a time-of-arrival parameter, the time-of-arrival parameter determined via measurement of a sensor system during normative testing. For example, for a given probability $P$, a cluster can be identified if $X_{i+N} - X_i < T$, where
  • $$T = \text{InvGamma}\!\left(P,\ N,\ \frac{1}{R}\right)$$
  • and
      • $N$ = number of disagreement indications
      • $R$ = arrival rate of the disagreement indications
  • Consider the following parameter values: $P = 0.05$, $N = 10$, and $R$ equal to the rate of false alarms during normative testing; then $T = 542$. Thus, in this scenario, the calculated time period (e.g., time window) in which 10 disagreements is concerning is 542 time units.
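  • This calculation can be sketched as follows, reading $\text{InvGamma}(P, N, 1/R)$ as the inverse cumulative distribution function (percent-point function) of a Gamma distribution with shape $N$ and scale $1/R$. The false-alarm rate of 0.01 indications per time unit is an assumed value, chosen here because it reproduces the $T = 542$ figure; the clustered(...) helper is a hypothetical illustration of the window check.

```python
from scipy.stats import gamma

def cluster_window(p: float, n: int, rate: float) -> float:
    """Window T such that n false disagreements arriving within T has
    probability p under a Poisson process with the given false-alarm rate."""
    return gamma.ppf(p, a=n, scale=1.0 / rate)

T = cluster_window(p=0.05, n=10, rate=0.01)  # rate is an assumed value
print(int(T))  # 542

def clustered(times: list[float], n: int, T: float) -> bool:
    """True when some n consecutive disagreement indications span < T."""
    return any(times[i + n] - times[i] < T for i in range(len(times) - n))
```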
  • The alert circuit set 165 can provide a deviation indication in response to the cluster analysis resulting in disagreement density beyond a threshold. As noted above, the cluster analysis provides an indication when the threshold is met, such as a predefined number of disagreement indications within the calculated time period. The deviation indication can be a variety of indications, such as an alarm, a disabling signal for the deviating sensor, initiation of an investigation into the source of the deviation (e.g., determining whether a false signal is manipulating the sensor), a log entry, etc. In an example, the deviation indication can indicate which sensor of the plurality of sensors is the source of the deviation. In an example, a relationship structure, such as a decision tree, can be applied to pairs (or other combinations) of sensors in the plurality of sensors using disagreements in the dataset that provided the deviation indication. That is, time correlated disagreements that resulted in the deviation indication can be compared to determine which sensor disagrees with others while eliminating from consideration those sensors that agree with the others. An example of such a decision tree is described below with respect to FIG. 3.
  • By applying the components and techniques described herein, the asynchronous nature of system attack and defense in perimeter defense is addressed. Disparate sensors can be used to control a cyber-physical system and misbehaving sensors can be reliably detected and dealt with. Thus, cyber-physical system builders and users can deploy critical systems with greater confidence in the ability of these systems to withstand attacks from malicious entities.
  • FIGS. 2A and 2B illustrate an example of a disagreement signal cluster analysis timeline 200, according to an embodiment. As described above, during a normative period (e.g., when the system is operating normally and is not under attack), disagreements between two sensors can still occur for a variety of reasons (e.g., differing calibration, error tolerances, measurement mechanisms, etc.). FIG. 2A illustrates a normative period of the timeline 200, with disagreement indications S-1 through S-6. Although illustrated as having fixed inter-arrival times, varying intervals can also be addressed because a cluster of such indications is not expected to occur during normative operation.
  • FIG. 2B illustrates a later period in the disagreement signal cluster analysis timeline 200, starting at S-N and moving through S-(N+M). In this portion of the timeline, region 205 indicates the calculated time period discussed above. The increased density of the disagreements in the timeline 200 indicates that a deviation is occurring. If the number of disagreements within the time period 205 is greater than the number used to calculate the time period 205 (e.g., 10 as described above), then an alert or other deviation indication can be produced.
  • FIG. 3 illustrates an example of a deviating sensor decision tree 300, according to an embodiment. As described above, a decision tree or other mechanism can be used to determine which sensor of a plurality of sensors is causing problems. In this example, the decision tree 300 applies to a system with two inertial navigation system sensors and two satellite navigation system sensors. In an example, the disagreements discussed below are determined after the cluster analysis results in a disagreement indication.
  • At the decision 305, it is tested whether the two inertial navigation system sensors agree with each other (i.e., that they did not disagree). If the two inertial navigation system sensors do agree with each other, then decision 310 determines whether the two satellite navigation system sensors agree with each other. If decision 310 is also affirmative, it can be concluded that the intra-type checks indicate that no problem exists. However, to address a sensor type attack (e.g., an attack effective across the entire sensor type, such as a global positioning system spoof attack), the first satellite navigation system sensor is compared to the first inertial navigation system sensor at decision 315. If decision 315 indicates an agreement, then no alarm 320 is initiated. However, if the decision 315 indicates a disagreement, the alarm 325 can be initiated and indicate a spoofing attack on the satellite navigation system sensors.
  • Moving back to the decision 310, a disagreement between the satellite navigation system sensors where the inertial navigation sensors agree (at decision 305) indicates that one of the satellite navigation sensors is deviating. To determine which of the two satellite navigation sensors is the offending sensor, the first satellite navigation sensor can be tested for agreement with a trusted sensor (e.g., the first inertial navigation sensor) at decision 330. If the first satellite navigation sensor does not agree with the first inertial navigation system sensor, then the alarm 340 can indicate that the first satellite navigation system sensor is the offending sensor. Otherwise, the alarm 335 can indicate that the second satellite navigation system sensor is the offending sensor.
  • Moving back to the decision 305, if the inertial navigation sensors disagree, one of them is the offending sensor. At the decision 345, the satellite navigation sensors can be tested for agreement. It is noted that if these sensors do not agree, the decision tree cannot make a determination because either of the inertial navigation sensors could be an offender and either of the satellite navigation system sensors could be an offender, and thus there is no trusted sensor that can be used to determine which sensor of each type is an offender. However, if the two satellite navigation sensors do agree with each other, each can be considered a trusted sensor (assuming that an attacker could not simultaneously perform a spoof attack and compromise an inertial navigation system sensor) to test the first inertial navigation sensor at decision 350. If the first inertial navigation sensor disagrees with a satellite navigation system sensor, then the alarm 355 can indicate that the first inertial navigation sensor is the offending sensor. Otherwise, alarm 360 can indicate that the second inertial navigation sensor is the offending sensor.
  • The decision tree 300 performs a trust analysis on some sensors that can then be used to test other sensors. Such relationships can be exploited in other sensor arrangements. Moreover, some sensors may be more resistant to attack, and can thus be tested first to establish a trusted sensor set early in the process, as with the initial testing of the inertial navigation system sensors at decision 305. In any case, the number of sensors tested can be increased, and the specific order of testing can be varied, as long as the final result is determinative of an offending set of sensors. A sketch of this logic in code follows.
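  • The following is a minimal sketch of the decision tree 300, assuming a pairwise predicate agree(a, b) that wraps the disagreement test described above (e.g., the compound scalar test evaluated over the alarm-triggering window); the sensor labels and returned strings are hypothetical placeholders rather than identifiers from the figures.

```python
from typing import Callable

def identify_offender(agree: Callable[[str, str], bool]) -> str:
    """Walk decision tree 300 for two inertial navigation sensors
    (ins1, ins2) and two satellite navigation sensors (sat1, sat2)."""
    if agree("ins1", "ins2"):                     # decision 305
        if agree("sat1", "sat2"):                 # decision 310
            if agree("sat1", "ins1"):             # decision 315
                return "no alarm"                 # outcome 320
            return "satellite spoofing attack"    # outcome 325
        if agree("sat1", "ins1"):                 # decision 330
            return "sat2 offending"               # outcome 335
        return "sat1 offending"                   # outcome 340
    if agree("sat1", "sat2"):                     # decision 345
        if agree("ins1", "sat1"):                 # decision 350
            return "ins2 offending"               # outcome 360
        return "ins1 offending"                   # outcome 355
    return "indeterminate"                        # no trusted sensor remains
```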
  • FIG. 4 illustrates an example of a method 400 for cyber-physical defense, according to an embodiment. The operations of the method 400 are implemented in computing hardware or carried out via computing hardware instructed by software. Example components are described above with respect to FIG. 1 and below with respect to FIG. 5.
  • At operation 405, sensor data for a plurality of sensors can be monitored. The plurality of sensors can include a first set of sensors (e.g., consisting of a first single type, such as satellite navigation systems) with a cardinality greater than one and a second set of sensors (e.g., consisting of a second single type, such as inertial navigation systems) also with a cardinality greater than one.
  • At operation 410, a disagreement signal can be created (e.g., generated) by calculating time correlated disagreements between sensors in the first set of sensors, between sensors in the second set of sensors, and between sensors in the first set of sensors and the second set of sensors. Thus, intra-set disagreements for both sets of sensors as well as inter-set disagreements between the two sets are determined.
  • Sensor disagreements can be determined via a statistical analysis of a common output by two sensors. In an example, residuals between two sensors can be computed and subjected to the statistical analysis. In an example, normality of the residuals over time can be used to ascertain whether a disagreement exists. In an example, the common output may be derived, such as a position output by an inertial navigation system. In an example, disagreements between sensors of the first set of sensors can be calculated by measuring the normality of the residuals of satellite measurements between two sensors. In an example, disagreements between sensors of the second set of sensors can be calculated by measuring the normality of the residuals of acceleration between two sensors. In an example, disagreements between sensors of the first set of sensors and the second set of sensors can be calculated by measuring the normality of the residuals of position between a sensor in the first set of sensors and a sensor in the second set of sensors.
  • At operation 415, the disagreement signal can be sampled and a determination can be made that the sampled disagreement signal has an inter-arrival time below a threshold. As noted above, disparate sensors may disagree at times by virtue of differing operating parameters, quality, or other factors. However, when one sensor is being manipulated, the frequency of disagreements rises, resulting in a clustering of disagreements in time (e.g., as illustrated in FIGS. 2A and 2B). In an example, the threshold is a time period whose magnitude is determined via an inverse Gamma function with a probability parameter, a sample-size parameter, and a time-of-arrival parameter, the time-of-arrival parameter determined via measurement of a sensor system during normative testing. Such a threshold delimits a period in which the disagreement density is beyond that of the system when it is not under attack.
  • At operation 420, an alarm can be provided in response to determining that the sampled disagreement signal has an inter-arrival time below the threshold. In an example, the alarm can include identification of a sensor in the plurality of sensors deemed to be compromised. As described above with respect to FIGS. 1 and 3, different sensor pairing structures can be used to make this identification. In an example, the sensor is deemed to be compromised when it disagrees with other sensors in the plurality of sensors and the other sensors in the plurality of sensors agree with each other. That is, the identification is of a sensor that disagrees with one or more other sensors where those other sensors agree with each other.
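  • Putting the operations of method 400 together, a minimal end-to-end sketch might look like the following. All numeric values (covariances, rates, window parameters, and indication times) are assumptions for illustration rather than values from the description above.

```python
import numpy as np
from scipy.stats import chi2, gamma

def disagreement(r: np.ndarray, cov: np.ndarray,
                 threshold: float = 0.99) -> bool:
    # Operation 410: pairwise residual test (compound scalar test).
    stat = float(r @ np.linalg.inv(cov) @ r)
    return chi2.cdf(stat, df=r.size) > threshold

def alarm(times: list[float], p: float = 0.05, n: int = 10,
          rate: float = 0.01) -> bool:
    # Operations 415 and 420: alarm when n disagreement indications
    # cluster within the calculated window T.
    T = gamma.ppf(p, a=n, scale=1.0 / rate)
    return any(times[i + n] - times[i] < T for i in range(len(times) - n))

# Assumed indication times: sparse false alarms, then a burst of ten.
normative = list(range(0, 5000, 100))
attacked = normative + list(range(5000, 5200, 20))
print(alarm(normative))  # False: no ten indications within T (about 542)
print(alarm(attacked))   # True: clustered disagreements trigger the alarm
```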
  • FIG. 5 illustrates a block diagram of an example machine 500 upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform. In alternative embodiments, the machine 500 may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine 500 may operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine 500 may act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine 500 may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a network router, switch, or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.
  • Examples, as described herein, may include, or may operate by, logic or a number of components, or mechanisms. Circuit sets are a collection of circuits implemented in tangible entities that include hardware (e.g., simple circuits, gates, logic, etc.). Circuit set membership may be flexible over time and underlying hardware variability. Circuit sets include members that may, alone or in combination, perform specified operations when operating. In an example, hardware of the circuit set may be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuit set may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuit set in hardware via the variable connections to carry out portions of the specific operation when in operation. Accordingly, the computer readable medium is communicatively coupled to the other components of the circuit set member when the device is operating. In an example, any of the physical components may be used in more than one member of more than one circuit set. For example, under operation, execution units may be used in a first circuit of a first circuit set at one point in time and reused by a second circuit in the first circuit set, or by a third circuit in a second circuit set at a different time.
  • Machine (e.g., computer system) 500 may include a hardware processor 502 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 504 and a static memory 506, some or all of which may communicate with each other via an interlink (e.g., bus) 508. The machine 500 may further include a display unit 510, an alphanumeric input device 512 (e.g., a keyboard), and a user interface (UI) navigation device 514 (e.g., a mouse). In an example, the display unit 510, input device 512 and UI navigation device 514 may be a touch screen display. The machine 500 may additionally include a storage device (e.g., drive unit) 516, a signal generation device 518 (e.g., a speaker), a network interface device 520, and one or more sensors 521, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine 500 may include an output controller 528, such as a serial (e.g., universal serial bus (USB), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.).
  • The storage device 516 may include a machine readable medium 522 on which is stored one or more sets of data structures or instructions 524 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 524 may also reside, completely or at least partially, within the main memory 504, within static memory 506, or within the hardware processor 502 during execution thereof by the machine 500. In an example, one or any combination of the hardware processor 502, the main memory 504, the static memory 506, or the storage device 516 may constitute machine readable media.
  • While the machine readable medium 522 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 524.
  • The term “machine readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 500 and that cause the machine 500 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media. In an example, a massed machine readable medium comprises a machine readable medium with a plurality of particles having invariant (e.g., rest) mass. Accordingly, massed machine-readable media are not transitory propagating signals. Specific examples of massed machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • The instructions 524 may further be transmitted or received over a communications network 526 using a transmission medium via the network interface device 520 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, among others. In an example, the network interface device 520 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 526. In an example, the network interface device 520 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine 500, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
  • Additional Notes
  • The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments that may be practiced. These embodiments are also referred to herein as “examples.” Such examples may include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.
  • All publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) should be considered supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.
  • In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim are still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.
  • The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is to allow the reader to quickly ascertain the nature of the technical disclosure and is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. The scope of the embodiments should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (1)

What is claimed is:
1. A system for cyber-physical system defense, the system comprising:
a sensor interface to receive sensor data from a plurality of sensors;
a sampling circuit set to sample sensor disagreements between the plurality of sensors over time;
a cluster circuit set to perform cluster analysis on the sampled sensor disagreements; and
an alert circuit set to provide a deviation indication in response to the cluster analysis resulting in disagreement density beyond a threshold.