CN112740133A - System and method for monitoring the technical state of a technical installation
- Publication number
- CN112740133A (application CN201980062659.0A)
- Authority
- CN
- China
- Prior art keywords
- anomaly
- technical
- univariate
- alarm
- signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B23/00—Testing or monitoring of control systems or parts thereof
- G05B23/02—Electric testing or monitoring
- G05B23/0205—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
- G05B23/0208—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterized by the configuration of the monitoring system
- G05B23/0213—Modular or universal configuration of the monitoring system, e.g. monitoring system having modules that may be combined to build monitoring program; monitoring system that can be applied to legacy systems; adaptable monitoring system; using different communication protocols
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/15—Correlation function computation including computation of convolution operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/18—Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B23/00—Testing or monitoring of control systems or parts thereof
- G05B23/02—Electric testing or monitoring
- G05B23/0205—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
- G05B23/0218—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults
- G05B23/0221—Preprocessing measurements, e.g. data collection rate adjustment; Standardization of measurements; Time series or signal analysis, e.g. frequency analysis or wavelets; Trustworthiness of measurements; Indexes therefor; Measurements using easily measured parameters to estimate parameters difficult to measure; Virtual sensor creation; De-noising; Sensor fusion; Unconventional preprocessing inherently present in specific fault detection methods like PCA-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
Abstract
A system (100), method and computer program product are provided for determining an abnormal technical state of a technical system (200). The computer system (100) receives a plurality of signals from the technical system (200), wherein each signal (S1 to Sn) reflects the technical state of at least one system component. The system further retrieves, from an alarm management system (300), high alarm thresholds (H1 to Hn) and low alarm thresholds (L1 to Ln) associated with the respective received signals (S1 to Sn). Signal values in the range between the associated high alarm threshold and the associated low alarm threshold reflect normal operation of the respective system component. For each signal (S1), a univariate distance from its associated alarm thresholds (H1/L1) is calculated to quantify the degree of anomaly of the respective system component. Based on the univariate distances, an Aggregate Anomaly Indicator (AAI) is calculated which reflects the technical state of the entire technical system (200). An operator (10) is provided with a comparison of the Aggregate Anomaly Indicator (AAI) with a predetermined anomaly threshold (AAT).
Description
Technical Field
The present invention relates generally to the monitoring of technical installations and more particularly to an alarm tool for supporting an operator of a technical installation in avoiding malfunctions.
Background
Many technical systems (such as, for example, technical devices in an automation system) are capable of generating alarms to indicate to an operator that interaction with the technical device is required, i.e., that a corresponding action has to be taken in response to the generated alarm. An alarm, as used herein and as defined in technical standard IEC 62682 section 3.1.7, is an audible and/or visual means of indicating to the operator a device failure, process deviation, or abnormal condition that requires a timely response (see also the International Society of Automation, ISA-18.2). An instance of a particular alarm is referred to as an alarm activation.
In real-world situations, a series of alarm activations stemming from a single root cause is often generated, where in practice a single alarm would be sufficient to indicate the problem in the technical system. Such a series of alarm activations is commonly referred to as an alarm flood. Alarm flood scenarios are characterized by combinations of multiple alarm activations that occur repeatedly. In other words, the same or similar combinations of alarms typically occur in multiple alarm floods. Generally, a persistently high alarm rate indicates poor alarm quality. Good alarm quality is achieved when:
- each alarm alerts, informs and guides the operator,
- alarms are presented at a rate that the operator can cope with, and
- detectable problems are alerted as early as possible.
There are different approaches for monitoring large and complex industrial systems to detect abnormal situations and generate corresponding alarm notifications for the operator(s). For example, statistical data-driven methods for (multivariate) process monitoring such as PCA and PLS (see "Multivariate statistical monitoring of process operating performance" by Kresta, MacGregor and Marlin, The Canadian Journal of Chemical Engineering, 69(1), 35-47, 1991) apply statistical analysis to actually measured values or technical status data. Alternatively, intelligent visualization approaches, such as parallel coordinate transformations combined with convex hulls computed for each pair of variables (see U.S. patent application US20080234840A1, Brooks et al.), allow the ranges of process variables to be displayed in parallel coordinates as piecewise linear curves between corresponding parallel axes. However, such statistical or mathematical analyses rely solely on historical values of the process variables and do not take any process knowledge of the monitored process into account; they therefore suffer from a large number of false positives among the detected alarms, because it remains unclear what actually constitutes an abnormal situation.
A certain deviation of the technical state parameter may be identified by statistical monitoring in order to trigger an alarm notification, although the deviation may still be considered to be within normal operation of the respective device.
As a result, it is difficult for the operator to derive reliable anomaly information about the overall technical state of the monitored technical installation from alarm notifications based solely on such statistical analysis.
Disclosure of Invention
Therefore, there is a need to improve alarm detection so that the operator can quickly determine the overall technical status of the monitored installation, the number of false positives is reduced, and the operator is enabled to take appropriate corrective action when required.
Embodiments of a technical solution to the above problem are disclosed according to the method, the computer program product and the computer system of the independent claims.
In one embodiment, a computer-implemented method for determining an abnormal technical state of a technical system is provided. In another embodiment, a computer system is configured to perform the method by executing a corresponding computer program comprising program instructions that cause the computer system to perform the corresponding method steps when the computer program is loaded into the memory of the computer system and the instructions are processed by one or more processors of the computer system.
The computer system receives a plurality of signals from the technical system. Each signal is sampled over time (using the same sampling frequency, or re-sampled in a pre-processing step, in order to ensure the availability of measured or estimated values of the plurality of signals at each instance of the calculation) and reflects the technical state of at least one system component of the technical system. That is, each signal relates to one system component, but a particular system component can be monitored by the operator via multiple signals. Typically, the technical system is monitored by one or more human operators. The totality of all signals reflects the overall technical state of the entire technical system. However, a human operator cannot derive information about the overall technical state of the technical system from the individual signals at the sensor level, since it is not possible for a human to comprehend the multitude of signals received from the sensors in real time.
The computer system assists the operator in this monitoring task by deriving from the received sensor signals a single aggregate anomaly indicator reflecting the technical state of the entire system.
An alarm management system is associated with the technical system. The alarm management system stores information about alarms associated with the signals. Alarm management systems are systems for prioritizing, grouping and classifying alerts and event notifications used in supervisory control and data acquisition (SCADA) in order to improve the provision of technical status information to operators. Most often, the main problem is that too many alarms are annunciated in a plant upset, which is often referred to as an alarm flood as explained above. However, there may be other issues with alarm systems, such as poorly designed alarms, improperly set alarm points, failed annunciations, unclear alarm messages, and so forth. Poor alarm management is one of the main causes of unplanned downtime and major industrial accidents. The alarm management system stores high and low alarm thresholds associated with the respective received signals. Signal values of a particular signal that are within the range between the associated high alarm threshold and the associated low alarm threshold reflect normal operation of the respective at least one system component. In other words, the alarm thresholds for a particular signal are based on historical knowledge of normal operation and abnormal system behavior. An alarm threshold reflects a critical value beyond which the corresponding signal value is no longer considered to be within the normal operating range. The alarm management system typically issues a per-signal alarm whenever a signal value exceeds a corresponding alarm threshold. Since many technical state parameters are relevant, this typically results in a so-called alarm flood, i.e., the operator is confronted with more information than can be resolved.
The alarm management system can be an integral part of the computer system, or it can be a remote system that is communicatively coupled with the computer system such that the computer system can access the data available in the alarm management system. The computer system retrieves, via an appropriate interface, the high and low alarm thresholds associated with the respective received signals from the alarm management system. The retrieval of the alarm thresholds may occur, for example, as an initialization step of the computer system. That is, the computer system may retrieve all available alarm thresholds from the alarm management system before it begins performing any calculations. The retrieval may be repeated at regular update intervals to account for changes in the alarm management system. For example, the update retrieval may retrieve alarm thresholds only for signals that are actually monitored via the computer system.
The computer system has a data processor configured to perform computing tasks as described below. First, the data processor calculates, for each signal having associated alarm thresholds, at each sampling time point, a univariate distance from its associated alarm thresholds. In general, a univariate distance is the (simple) distance between the values of a single variable for two observations. In the present application, the univariate distance is the maximum of the normalized distances between the value of the respective signal and its associated alarm thresholds, and it quantifies the degree of anomaly of the respective at least one system component. The univariate distance d(t) for a particular signal at a sampling time point t is expressed as a function of x(t), x_h, x_l and a (formula F1), wherein x(t) is the sample of the signal at time t, x_h is the high alarm threshold associated with the signal as defined in the alarm management system, x_l is the low alarm threshold associated with the signal, and a is a regular (normal-operation) value of the variable with x_l < a < x_h.
By default, a can be selected as a fixed value between x_l and x_h; other values between x_l and x_h can also be selected, for example by estimating the normal operating value based on normal operating data.
In one embodiment, the univariate distance for a particular signal at a particular sampling time point can be calculated as a piecewise linear index (formulas F2a to F2c). In other words, if the sampled signal value lies between the low alarm threshold and the high alarm threshold, the distance value lies between 0 and 1 (F2a); if the sampled signal value is less than or equal to the low alarm threshold or greater than or equal to the high alarm threshold, the distance value is 1 (F2b); and if the sampled signal value corresponds to the predefined parameter value a reflecting normal operation, the distance value is 0 (F2c).
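For illustration only (not part of the original disclosure), the following minimal Python sketch shows one piecewise-linear realization consistent with the behavior described for F2a to F2c; the function name is illustrative, and the use of the midpoint between the alarm thresholds as a default for the regular value a is an assumption, since the text leaves the default open.

```python
# Illustrative sketch (assumption-based), not the literal formula F2 of the disclosure.
def univariate_distance(x, x_low, x_high, a=None):
    """Piecewise-linear distance of a sampled signal value x from its alarm thresholds.

    x_low, x_high : low/high alarm thresholds retrieved from the alarm management system
    a             : value regarded as normal operation, with x_low < a < x_high
                    (the midpoint is chosen here as an assumed default)
    Returns 0 at x == a, 1 at or beyond either alarm threshold, and a value
    between 0 and 1 otherwise, as described for F2a to F2c.
    """
    if a is None:
        a = 0.5 * (x_low + x_high)          # assumed default for the regular value
    if x >= x_high or x <= x_low:           # at or beyond an alarm threshold (F2b)
        return 1.0
    if x >= a:                              # between regular value and high threshold
        return (x - a) / (x_high - a)       # grows linearly toward 1 at x_high (F2a)
    return (a - x) / (a - x_low)            # grows linearly toward 1 at x_low (F2a)
```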
In an alternative embodiment, the univariate distance d(t) can be calculated as a smooth index instead of the piecewise linear calculation above. For example, d(t) can be calculated using exponential smoothing of the normalized distance (formula F3), where a smoothing exponent of 2 relates to parabolic smoothing and a smoothing exponent of 3 relates to hyperbolic smoothing.
Furthermore, real signals are noisy: their "normal" values fluctuate (e.g., in a Gaussian distribution) around the value a. The calculation of the univariate distance can therefore be refined by introducing a "regular range" [a_1, a_2] for the signal, in which the upper interval limit a_2 is less than the corresponding high alarm threshold x_h and the lower interval limit a_1 is greater than the corresponding low alarm threshold x_l. Such an interval serves as a dead band between a_1 and a_2 (x_l < a_1 < a_2 < x_h). A dead band (sometimes referred to as a neutral zone or dead zone) is a band of input values in the domain of a transfer function in a control system or signal processing system where the output is zero (the output is 'dead', no action occurs). Dead bands can be used in control systems, such as servo amplifiers, to prevent oscillations or repeated activation-deactivation cycles.
In the case of such a dead band, the univariate distance d(t) for a particular signal can be calculated as follows (formulas F4a to F4c): if the sampled signal value lies within the dead band interval at a particular sampling time point, the distance value is 0 (F4a); for signal values below the lower limit a_1 of the regular range, the distance value grows from 0 toward 1 as the value approaches the low alarm threshold x_l (F4b); and for signal values above the upper limit a_2 of the regular range, the distance value grows from 0 toward 1 as the value approaches the high alarm threshold x_h (F4c).
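Again for illustration only, a corresponding sketch of the dead-band variant (F4a to F4c); the linear growth outside the regular range [a_1, a_2] is an assumption analogous to the piecewise-linear case above.

```python
def univariate_distance_deadband(x, x_low, x_high, a1, a2):
    """Dead-band variant of the univariate distance (sketch, assumption-based).

    Assumes x_low < a1 < a2 < x_high. Returns 0 inside the regular range
    [a1, a2] (F4a), 1 at or beyond an alarm threshold, and a value growing
    toward 1 as x approaches a threshold (F4b/F4c).
    """
    if a1 <= x <= a2:                       # inside the dead band (F4a)
        return 0.0
    if x >= x_high or x <= x_low:           # at or beyond an alarm threshold
        return 1.0
    if x > a2:                              # above the regular range (F4c)
        return (x - a2) / (x_high - a2)
    return (a1 - x) / (a1 - x_low)          # below the regular range (F4b)
```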
Once the univariate distance is determined by the data processor, further calculation steps are performed. At each sampling time point, the computer system now calculates an aggregate anomaly indicator reflecting the technical state of the entire technical system based on the univariate distances at the respective sampling time point.
In one embodiment, the aggregate anomaly indicator is calculated as Euclidian distance d (t) based on the univariate distance of the corresponding signal and the total number of signals:
In an alternative embodiment, the aggregate anomaly indicator is calculated as a weighted Euclidean distance D_w(t) based on the univariate distances of the corresponding signals and the total number of signals, wherein each univariate distance contribution is weighted with a weighting factor corresponding to the severity of the alarm associated with the respective signal as defined in the alarm management system (formula F6), wherein d_i(t) corresponds to the univariate distance of signal i and N corresponds to the total number of received signals.
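For illustration only, a minimal sketch of the aggregation step; the normalization by the number of signals N is an assumption made so that the indicator remains comparable when signals are added or removed, and severity-based weights are passed in as plain numbers.

```python
import math

def aggregate_anomaly_indicator(distances, weights=None):
    """Aggregate anomaly indicator AAI at one sampling time point (sketch).

    distances : univariate distances d_i(t), one per monitored signal
    weights   : optional per-signal weighting factors, e.g. derived from the
                alarm severity defined in the alarm management system (F6);
                if omitted, the unweighted Euclidean form (F5) is used
    """
    n = len(distances)
    if weights is None:
        weights = [1.0] * n
    # root-mean-square style aggregation; the 1/N normalization is an assumption
    return math.sqrt(sum(w * d * d for w, d in zip(weights, distances)) / n)
```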
The aggregate anomaly indicator now reflects the technical state of the entire technical system, since the aggregate anomaly indicator includes technical state information about all monitored system components. In other words, presenting the aggregate anomaly indicator to the operator provides the operator with a visual indication of internal states that are prevalent in the technical system. To enable an operator to quickly identify abnormal system behavior and take corrective action, the system provides a comparison of the aggregate abnormality index to a predetermined abnormality threshold. The anomaly threshold value is selected to ensure with a given probability (or confidence, e.g., 95%) that the aggregate anomaly indicator value below the anomaly threshold value reflects normal operation of the technical system. The given probability may be defined by the user as the target probability, or the given probability may be a predefined confidence value. For example, as known to those skilled in the art, the anomaly threshold can be determined by using a cumulative distribution function of aggregated anomaly indicators during normal operation of the technical system. When the aggregate anomaly indicator exceeds an anomaly threshold, an anomalous technical state is determined.
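As an illustrative sketch of how the anomaly threshold could be derived from the cumulative distribution of AAI values recorded during known normal operation (the use of the empirical quantile is one possible choice, not prescribed by the text):

```python
import numpy as np

def anomaly_threshold(aai_normal_operation, confidence=0.95):
    """Choose the anomaly threshold AAT as the empirical quantile of AAI values
    observed during normal operation, so that with the given probability
    (confidence) an AAI value below the threshold reflects normal operation."""
    return float(np.quantile(np.asarray(aai_normal_operation), confidence))

# usage sketch: an abnormal technical state is determined when the current AAI
# exceeds the threshold
# is_abnormal = current_aai > anomaly_threshold(aai_history, confidence=0.95)
```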
The aggregate anomaly indicator AAI provides simplified technical state information for the entire system that can be easily handled by an operator. For example, at the moment the AAI exceeds the anomaly threshold in the corresponding graphical visualization, the operator is alerted that the technical system exhibits abnormal behavior. In other words, the AAI is a trigger for the operator to perform a more thorough system analysis to identify the root cause of the abnormal behavior. The trigger point in the AAI curve is typically reached even before an alarm is triggered by the alarm management system, as alarm triggering typically depends on patterns in signal behavior that can easily extend over a longer period of time. The AAI does not require any pattern recognition but only considers the aggregated indicator over all signals. As a result, neither high-performance hardware nor complex models are required for pattern recognition, since the claimed approach is a purely data-driven approach that can easily be used for technical systems in a plant without the need to adapt hardware or OPC Alarm and Event (A&E) servers.
In order to apply the method for determining an abnormal technical state of a technical system, it is advantageous when the technical system is operating in a steady state. Thus, prior to the calculation steps for the AAI, a steady state detection algorithm can be used to determine whether the technical system is operating in a steady-state process. If the process is not in a steady state, the AAI calculation can be suppressed. This optional switching function conserves computing resources during periods in which meaningful AAI calculations are not possible. Steady state detection algorithms are well known in the art and are disclosed in a number of papers, such as, for example, Cao, S. and Rhinehart, R.R., "An efficient method for on-line identification of steady state", Journal of Process Control, 5(6), 363-374 (1995).
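As an illustrative sketch in the spirit of the filter-based method of Cao and Rhinehart (1995); the filter constants and the critical limit on the R statistic are assumptions and would need to be tuned for the plant at hand:

```python
def steady_state_r_statistic(samples, lam1=0.2, lam2=0.1, lam3=0.1):
    """Return an R statistic per sample for a signal (sketch, assumption-based).

    Values of R close to 1 indicate steady state; large values indicate a
    transient. The AAI calculation could be suppressed while R exceeds a
    critical limit (e.g. around 2, an illustrative choice)."""
    xf = samples[0]          # exponentially filtered signal value
    nu2 = 0.0                # filtered squared deviation from the filtered value
    delta2 = 1e-12           # filtered squared difference of successive samples
    prev, r = samples[0], []
    for x in samples[1:]:
        nu2 = lam2 * (x - xf) ** 2 + (1 - lam2) * nu2   # deviation before filter update
        xf = lam1 * x + (1 - lam1) * xf
        delta2 = lam3 * (x - prev) ** 2 + (1 - lam3) * delta2
        prev = x
        r.append((2 - lam1) * nu2 / max(delta2, 1e-12))
    return r
```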
As mentioned previously, the AAI can be interpreted as a trigger function for the operator to carry out a root cause analysis for the technical system. The disclosed method is also able to support the operator in this task. In one embodiment, the computer system further provides the operator with a subset of the univariate distances at the respective sampling time points, wherein the subset relates to those univariate distances having the highest contribution to the increase of the aggregate anomaly indicator. The size of the subset may be configurable by the operator. For example, the operator may define 5 or 10 to configure the computer system to show, as a drill-down option, the top 5 or top 10 univariate distances contributing to the AAI. As a result, the operator can immediately see which signal, and therefore which system component, is primarily responsible for an AAI increase that exceeds the anomaly threshold.
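A minimal sketch of such a drill-down ranking (illustrative names; the contribution is approximated here simply by the current univariate distance value):

```python
def top_contributors(distances_by_signal, m=5):
    """Return the m signals whose univariate distances are largest at the
    current sampling time point, i.e. the most likely drivers of an AAI increase.

    distances_by_signal : mapping of signal name -> univariate distance d_i(t)
    """
    ranked = sorted(distances_by_signal.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:m]
```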
In a further alternative embodiment, the support for root cause analysis is further improved. A component hierarchy of the technical system may define a plurality of functional blocks of the technical system. The functional blocks can be represented by child nodes of the technical system in the component hierarchy. Each functional block can again include a plurality of child nodes that include further functional blocks and/or system components. That is, the hierarchy can describe multiple levels of functional blocks (nested functional blocks). The computer system is now able to calculate an aggregate block anomaly indicator (BAI) for the respective functional block at each sampling time point. The calculation for a particular functional block is based on the subset of the univariate distances associated with that functional block (at the respective sampling time points). The calculated block anomaly indicator(s) (BAI) reflect the technical state of the functional blocks of the technical system. The computer system can now also provide the operator with a comparison of the block anomaly indicator (BAI) with a predetermined block anomaly threshold. The block anomaly threshold is selected to ensure with a given probability that aggregate block anomaly indicator values below the block anomaly threshold reflect normal operation of the particular functional block. By using such BAIs in addition to the AAI and the univariate distances, the operator receives simplified technical state parameters for each functional block defined in the component hierarchy. That is, when the AAI exceeds the anomaly threshold, the operator can quickly drill down to the corresponding functional blocks (e.g., boiler, pump, turbine, or process area) of the technical system and identify the functional block that contributes most to the anomaly. Similar to the univariate distances, the computer system can provide a ranking list of the functional blocks that contribute most to the abnormal behavior. Of course, for each BAI, it is possible to drill further down to the corresponding univariate distances. With this option, the operator can quickly identify the system components of a functional block that cause the failure of the entire system.
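For illustration only, a sketch of the per-block aggregation; the hierarchy is represented here as a flat mapping of block names to the signals below them (nested blocks would simply list all signals of their sub-tree), which is an assumption about the data structure rather than the representation used in the disclosure:

```python
import math

def block_anomaly_indicators(block_to_signals, distances_by_signal):
    """Aggregate block anomaly indicator BAI per functional block (sketch).

    block_to_signals    : mapping of functional-block name -> signal names below it
    distances_by_signal : mapping of signal name -> univariate distance d_i(t)
    Uses the same unweighted Euclidean-style aggregation as for the AAI.
    """
    bai = {}
    for block, signals in block_to_signals.items():
        subset = [distances_by_signal[s] for s in signals if s in distances_by_signal]
        if subset:
            bai[block] = math.sqrt(sum(d * d for d in subset) / len(subset))
    return bai
```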
In one embodiment, a particular technical state parameter may be represented by a plurality of sensor signals that provide redundant information about that technical state. In such a scenario, the calculation of the univariate distance for the technical state parameter can be made robust against the failure of a sensor providing redundant information. In other words, robust against failure means that the failure of a single sensor does not have a significant impact on the reliability of the technical state parameter reflected by the corresponding univariate distance. This is achieved by aggregating the univariate distances associated with the multiple sensor signals into a robust univariate distance for the particular technical state parameter. Even if one of the signals disappears (e.g., because the sensor's battery or data communication link fails), the robust univariate distance still provides meaningful information about the normal/abnormal behavior of the corresponding system component.
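A minimal sketch of such a robust aggregation; using the median of the redundant distances is one illustrative choice (akin to majority voting), not the only one suggested by the text:

```python
import statistics

def robust_univariate_distance(redundant_distances):
    """Aggregate univariate distances of redundant sensors that measure the same
    technical state parameter (sketch). A failed or missing sensor is passed as
    None and skipped, so a single failure does not dominate the result."""
    valid = [d for d in redundant_distances if d is not None]
    if not valid:
        return None            # no sensor available for this parameter
    return statistics.median(valid)
```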
In one embodiment, a computer program product for determining an abnormal technical state of a technical system is provided. The program comprises instructions which, when loaded into the memory of the computer system and executed by at least one processor of the computer system, cause the computer system to carry out the method steps as disclosed herein.
A computer system for executing the computer program can be described by functional modules configured to perform the method steps at system runtime. The computer system has an interface that receives a plurality of signals from the technical system, wherein each signal is sampled over time and reflects a technical state of at least one system component. Further, via the interface, the computer system retrieves, from an alarm management system associated with the technical system, a high alarm threshold and a low alarm threshold associated with the respective received signal. Signal values of particular signals that are within a range between the associated high alarm threshold and the associated low alarm threshold reflect normal operation of the respective at least one system component.
Furthermore, the computer system has a data processor to calculate, for each signal having an associated alarm threshold, at each sampling time point, a univariate distance from its associated alarm threshold as a maximum of the simple distances between the value of the respective signal and its associated alarm threshold to quantify the degree of anomaly for the respective at least one system component; and calculating, at each sampling time point, an aggregate anomaly indicator reflecting the technical state of the entire technical system based on the univariate distances at the respective sampling time point. As used herein, the term "at each sampling time point" refers to each sampling time point used for the calculating step. That is, with a high sampling frequency, it may be sufficient to carry out the calculation step only for every other, every third, etc. sampling time point. The skilled person will understand that it is not necessary to use each physical sampling point in time in any scenario.
A user interface of the computer system provides the operator with a comparison of the aggregate anomaly index to a predetermined anomaly threshold. The anomaly threshold ensures with a given probability (confidence) that the aggregate anomaly indicator value reflects normal operation of the technical system when it falls below the anomaly threshold. In other words, the technical system transitions to an abnormal technical state when the aggregate anomaly index exceeds the anomaly threshold.
In one embodiment, the computer system further comprises a computation switch utilizing a steady state detection algorithm (SDA) configured to determine whether the technical system is operating in a steady-state process, and to suppress the subsequent computation steps when the process is not in a steady state.
In one embodiment, the computer system has a component hierarchy of the technical system. The hierarchy defines a plurality of functional blocks as child nodes of the technical system, wherein each functional block comprises a plurality of child nodes comprising further functional blocks and/or system components. The processor of the computer system is capable of calculating, at each sampling time point (i.e., each sampling time point used for the calculation), an aggregate block anomaly indicator BAI for a particular functional block based on the subset of univariate distances associated with that functional block at the respective sampling time point, wherein the block anomaly indicator reflects the technical state of the functional block. The user interface can provide the operator with a comparison of the BAI to a predetermined block anomaly threshold. The block anomaly threshold ensures with a given probability that an aggregate block anomaly indicator value below the block anomaly threshold reflects normal operation of the particular functional block.
In one embodiment, the user interface further provides the operator with a subset of univariate distances at respective sampling time points, wherein the subset relates to such distances having the highest contribution to the increase of the aggregate anomaly measure or the respective block anomaly measure. The subset has a configurable (e.g., operator configurable) or predefined size.
Further aspects of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as described.
Drawings
FIG. 1 includes a block diagram of a computer system for determining an abnormal technical state of a technical system, according to an embodiment;
FIG. 2 is a simplified flow diagram of a computer-implemented method for determining an abnormal technical state of a technical system, according to an embodiment;
FIG. 3A illustrates univariate distances for example signals reflecting the technical state of system components of a technical system;
FIG. 3B illustrates the calculation of an aggregate anomaly indicator for a technical system, according to an embodiment;
FIG. 3C illustrates types of cumulative distribution functions that can be used to determine an anomaly threshold, according to an embodiment;
FIG. 3D illustrates univariate distances for a subset of signals having a high contribution to the aggregate anomaly indicator, according to an embodiment;
FIG. 4 illustrates an example of a component hierarchy of a technical system including functional blocks;
FIGS. 5A-5C illustrate a real-world example scenario for which aggregate anomaly indicators are determined;
FIG. 6 is a diagram illustrating an example of a general purpose computer device and a general purpose mobile computer device that may be used with the techniques described herein.
Detailed Description
Fig. 1 is a block diagram of an example embodiment of a computer system 100 for determining an abnormal technical state of a technical system 200. The computer system 100 and the technical system 200 are communicatively coupled, and the computer system 100 is configured to monitor the technical state of the technical system 200. For example, the technical system 200 can be a process plant, a power plant, or any other installation that performs an industrial process. Typically, industrial processes in a plant (e.g., chemical, oil, paper and pulp mills, etc.) are controlled by an automation system that interconnects sensors, controllers, operator terminals, and actuators using a network. Such automation systems often use a control system architecture known as supervisory control and data acquisition (SCADA). The computer system 100 has an interface 110 which receives a plurality of signals S1 to Sn from the technical system 200. Each signal is sampled over time and reflects the technical state of at least one system component. For example, a temperature signal may reflect the technical state of a motor assembly by indicating the temperature of the motor (a too high temperature may be an indicator of overheating). At the same time, further signals (such as vibration sensor signals) may also provide technical status information about the motor, since too strong vibrations may indicate a problem with the bearings of the motor. Those skilled in the art know which types of sensors are suitable in a technical system for monitoring the technical state of the respective components or functional blocks of the technical system. A functional block can include a plurality of system components that together perform a certain function (e.g., gas purification).
Fig. 2 is a simplified flow diagram of a computer-implemented method 1000 for determining an abnormal technical state of the technical system 200. The computer system 100 is capable of performing the method when a computer program is loaded into the memory of the computer system 100, wherein the computer program has computer readable instructions which, when loaded and executed by at least one processor of the computer system 100, cause the computer system to carry out the steps of the method 1000.
Hereinafter, the computer system 100 of fig. 1 is disclosed in the context of the flow chart of fig. 2. For this reason, the following description uses reference numerals referring to fig. 1 and 2. Optional components and optional method steps of the computer system 100 are illustrated in the respective figures by dashed lines.
To receive 1100 the sensor data S1 to Sn from the technical system 200 via the interface 110, the computer system 100 can use any suitable process automation protocol. For example, one skilled in the art may select an appropriate protocol from the protocol standards listed in the Wikipedia list of automation protocols available at: https://en.wikipedia.org/wiki/List_of_automation_protocols.
Additionally, the computer system 100 is communicatively coupled with an alarm management system 300 associated with the technical system 200. The alarm management system 300 can also be an integral part of the computer system 100, or the alarm management system 300 can run on a remote computer accessible by the computer system 100 over a corresponding network. The alarm management system 300 stores or determines the high alarm thresholds H1 to Hn and the low alarm thresholds L1 to Ln associated with the respective signals S1 to Sn of the technical system 200, whereby a signal value of a particular signal in the range between the associated high alarm threshold and the associated low alarm threshold reflects normal operation of the respective system component monitored by the particular signal. Alarm management is typically used in process manufacturing environments that are controlled by operators using supervisory control systems, such as DCS, SCADA, or programmable logic controllers (PLC). Such systems may have hundreds of individual alarms that are often designed with only limited consideration of other alarms in the system. Since humans can only do one thing at a time, and can focus on a limited number of things at a time, some way is needed to ensure that alarms are presented at a rate that human operators can cope with, particularly when the plant is upset or in an abnormal situation. Advantageously, the alarms should focus the operator's attention on the most important problems he or she needs to deal with, for example using a priority indicating the degree or rank of importance. However, although the alarm management system includes the knowledge about the alarm situation of the associated technical system (reflected by the low/high alarm thresholds), such a system does not provide an aggregated indicator reflecting the overall technical state of the entire plant. In this context, the information about the alarm thresholds is still valuable, as it includes the knowledge about the entire alarm history of the technical system. In an initialization step, the computer system 100 retrieves 1050 from the alarm management system 300 the high alarm thresholds H1 to Hn and the low alarm thresholds L1 to Ln associated with the respective signals S1 to Sn of the technical system 200 and uses this data in the subsequent data processing steps to determine an indicator reflecting the technical state of the entire technical system 200 based on the received signal data and alarm thresholds. This indicator is referred to as the aggregate anomaly indicator AAI of the technical system 200. Optionally, the computer system can carry out an update retrieval step 1200 to adapt to changes in the alarm management system during operation of the technical system. Such an update retrieval 1200 may be limited to alarm thresholds associated with signals actually monitored via the computer system 100.
The computer system 100 has a data processor 120 with various modules for carrying out data processing tasks with respect to the received input data (signals S1 to Sn and high/low alarm threshold pairs H1/L1 to Hn/Ln). In the example, each signal S1 to Sn has an associated alarm threshold pair. In real technical systems, the aggregate anomaly indicator is calculated using alarms having associated limits, for example absolute alarms, deviation alarms and rate-of-change alarms as defined by the standard NAMUR NA 102 for the application of alarm management. The version of the NA 102 specification dated 2018-02-10 can be obtained at: https://www.namur.net/de/empfehlung-u-arbeitsblaetter/aktuelle-nena.html.
For each signal (e.g., signal S1) having an associated alarm threshold pair (e.g., H1/L1), the univariate distance module 121 of the data processor calculates 1300, at each sampling time point, a univariate distance (e.g., dS1(t)) to the alarm thresholds associated with the respective signal. The univariate distance is determined as the maximum of the distances between the value of the respective signal and its associated alarm thresholds, and quantifies the degree of anomaly of the system component(s) associated with the respective signal. The calculation may be performed according to the formulas F1 and F2a to F2c. Alternatively, exponential smoothing may be used according to the formulas F3 and F4a to F4c. The calculated univariate distances are then provided as input to an anomaly indicator module 122 of the data processor.
The module 122 calculates 1400, at each sampling time point, an aggregate anomaly indicator AAI reflecting the technical state of the entire technical system 200 based on the univariate distances at the respective sampling time point. For example, the aggregate anomaly indicator at a particular sampling time point may be calculated as a Euclidean distance based on the univariate distances of the corresponding signals and the total number of signals according to formula F5.
Alternatively, it may be calculated as a weighted Euclidean distance based on the univariate distances of the corresponding signals and the total number of signals according to formula F6. Thereby, each univariate distance contribution is weighted with a weighting factor corresponding to the severity of the alarm associated with the respective signal as defined in the alarm management system. In other words, an alarm for a signal whose associated component has a lower impact on the overall technical performance of the technical system 200 may contribute less to the aggregate anomaly indicator.
The computer system 100 also has a user interface (UI) component 130. The UI 130 can be implemented as any kind of human machine interface (HMI) allowing an operator 10 of the technical system to communicate with the computer system 100. The UI 130 can include respective input/output components, including, but not limited to, audiovisual components with display/sound output components that convey information to the user and data input components (e.g., a keyboard, mouse, touch screen, etc.) that receive input data from the user. The UI 130 provides 1500 to the operator 10 a comparison of the aggregate anomaly indicator AAI with a predetermined anomaly threshold. The anomaly threshold ensures with a given probability that an aggregate anomaly indicator value below the anomaly threshold reflects normal operation of the technical system 200. In other words, when the aggregate anomaly indicator value is less than the anomaly threshold, then there is a given probability (e.g., with a confidence of 0.95) that the technical system 200 is operating normally. The probability may become even higher (e.g., 0.99) by using a corresponding anomaly threshold. Advantageously, the anomaly threshold is determined by using a cumulative distribution function of the aggregate anomaly indicator AAI during normal operation of the technical system 200. The computational tasks in steps 1300, 1400, and 1500 of method 1000 are discussed in more detail in the descriptions of Figs. 3A to 3C.
In an alternative embodiment, the data processor 120 has a computation switch 123. The computation switch is implemented as a steady state detection algorithm SDA capable of determining 1250 whether the technical system 200 is operating in a steady state. If the technical system is not in a steady state ("no"), the computer system does not perform the computational tasks of steps 1300, 1400, 1500. Otherwise ("yes"), the method 1000 proceeds to step 1300. For the computation tasks, it is advantageous if the process run by the technical system is in a steady state. Therefore, the computation switch 123 is able to switch off the calculation of all indicators (univariate distances and aggregate anomaly indicator) during transient phases. For example, well-known steady state detection algorithms can be used to identify when the calculation of the indicators should be started again (e.g., Cao, S. and Rhinehart, R.R. (1995), "An efficient method for on-line identification of steady state", Journal of Process Control, 5(6), 363-374).
In a further alternative embodiment, the computer system 100 has access to a hierarchy of components of the technical system 200. Such a component hierarchy may be stored by the computer system itself, or such a component hierarchy may be provided by the technical system or its associated automation system. The component hierarchy defines a plurality of function blocks as child nodes of the technical system. Each functional block can comprise a plurality of sub-nodes, which may be further functional blocks and/or system components of the technical system. In other words, the functional blocks serve to group together a plurality of system components which can be associated with the same function of the technical system. Such function blocks are sometimes also referred to as process blocks (e.g., boilers, pumps, turbines, or process areas). Details of the component hierarchy are discussed in the context of FIG. 4.
In this optional embodiment, the data processor 120 is further configured to calculate 1450 the aggregate block anomaly indicator(s) BAI at each sampling time point. The block anomaly indicator(s) BAI reflect the technical state of the corresponding functional block(s). Based on the subset of univariate distances (at the respective sampling time points) associated with a particular functional block, a corresponding aggregate block anomaly indicator BAI is calculated for that functional block. This calculation is carried out in a similar manner as the calculation of the AAI, but only over the subset of univariate distances associated with the particular functional block. Further, the user interface 130 provides 1550 to the operator a comparison of the particular block anomaly indicator BAI with a predetermined block anomaly threshold. Similar to the comparison against the AAT, the block anomaly threshold ensures with a given probability that an aggregate block anomaly indicator value below the block anomaly threshold reflects normal operation of the particular functional block. In this embodiment, the operator can drill down from the original AAI to the BAIs of the functional blocks of the technical system. This allows the operator to carry out a root cause analysis at the level of the functional blocks of the technical system and to quickly identify the functional block(s) that contribute most to the abnormal situation of the technical system as a whole, as identified by the AAI.
In a further alternative embodiment, a drill-down function to the level of the system component is enabled. In this embodiment, the UI 130 further provides 1600 to the operator a subset of univariate distances TOPm at the respective sampling time point. Thus, the subset TOPm relates to such distances having the highest contribution to the increase of the aggregate anomaly measure, wherein the size m of the subset TOPm is predefined. Since each univariate distance is directly associated with a signal, which in turn is associated with a system component, drill-down to the component level is enabled. For example, the operator may set the size m such that he receives a certain amount of technical state information that can still be handled with his cognitive abilities. Different operators may choose different sizes. The computer system may set a default value that can be selected as the average size used by all users of the computer system. Based on the technical status information communicated to the operator 10 by the AAI (and optional drill-down information regarding the BAI and/or system components), the operator can initiate corrective action 20 in response to the determined anomaly indicator(s). As a result, the computer system assists the operator in carrying out the technical task of monitoring the technical system and in interacting with the technical system when required.
In further alternative embodiments, as set forth previously, a particular technical state parameter (such as the state of a chemical reactor) may be represented by a plurality of sensor signals (such as, for example, temperatures measured by a plurality of temperature sensors). The sensors provide redundant information about the particular technical state of the given reactor. Nonetheless, each of the temperature signals is indicative of normal or abnormal operation of the reactor. The data processor may aggregate the univariate distances associated with the plurality of sensor signals to provide a robust univariate distance for the particular technical state parameter. In the reactor example, the univariate distances corresponding to the temperature signals of the respective temperature sensors can be aggregated. If one of the sensors fails, a meaningful distance value characterizing the technical state of the reactor is still available. For example, two-out-of-three voting can be used to obtain the actual reactor temperature in the event of a failure of one sensor. In other cases, sensor redundancy may be used, for example, where a first sensor is used by the control system and a second sensor is used by the safety system.
Fig. 3A illustrates univariate distances d1 to d34 for real-world example signals reflecting the technical state of system components of a technical system. Some of the signals show abnormal behavior, reflected by an increase of the respective univariate distances (e.g., d3, d4, d13, d15, d20, d21, etc.) up to the upper (abnormal) limit of the univariate distance range at certain points in time. Some signals (e.g., d5 to d10) show no increase of the univariate distance at all. Some signals (e.g., d18, d19) show intermediate increases of the univariate distances that normalize again without reaching the upper limit.
Fig. 3B shows a view 360 with the aggregate anomaly indicator AAI of the technical system provided to an operator of the technical system. The view 360 also includes a visualization of the anomaly threshold AAT to which the AAI is compared. The AAI is calculated from the univariate distances of Fig. 3A according to formula F5 or F6. The anomaly threshold AAT is predetermined such that aggregate anomaly indicator values below the anomaly threshold AAT reflect normal operation of the technical system with a given probability p (e.g., p = 0.95). Advantageously, the anomaly threshold AAT is determined by using a cumulative distribution function of the aggregate anomaly indicator AAI during normal operation of the technical system. In probability theory and statistics, the cumulative distribution function (CDF) of a real-valued random variable X1 evaluated at x is the probability that X1 takes a value less than or equal to x. In the case of a continuous distribution, it gives the area under the probability density function from minus infinity to x.
FIG. 3C illustrates CDF types of cumulative distribution functions that can be used to determine the anomaly threshold. Cumulative distribution functions are explained in detail in numerous publications, such as, for example, in An Introduction to Statistical Modelling by Annette J. Dobson (Chapman and Hall, 1983). The CDF type 371 shows a cumulative distribution function of a discrete probability distribution. The CDF type 372 shows a cumulative distribution function of a continuous probability distribution. The CDF type 373 shows a cumulative distribution function of a distribution having both continuous and discrete portions. Those skilled in the art will be able to select the appropriate CDF type for determining the anomaly threshold. In many cases, CDF type 372 is appropriate.
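As a minimal sketch of this step (Python, with hypothetical names and assuming that a history of AAI values recorded during normal operation is available), the threshold AAT can be taken as the p-quantile of the empirical distribution, i.e., the point where the empirical CDF reaches p.

```python
import numpy as np

def anomaly_threshold(aai_normal_operation, p=0.95):
    """Determine the anomaly threshold AAT as the p-quantile of aggregate
    anomaly indicator values observed during normal operation.

    Below the returned threshold, an AAI value reflects normal operation
    with probability p with respect to the empirical CDF of the history.
    """
    return float(np.quantile(np.asarray(aai_normal_operation, dtype=float), p))

# Example: synthetic AAI history recorded while the plant ran normally.
history = np.random.default_rng(0).normal(loc=0.3, scale=0.05, size=10_000)
print(anomaly_threshold(history, p=0.95))   # roughly 0.3 + 1.64 * 0.05
```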
FIG. 3D shows the univariate distances d20, d21, d25, d32, d33 for a subset of signals having a high contribution to the aggregate anomaly indicator. In the example, the subset TOPm includes the top 5 of the univariate distances of Fig. 3A. That is, the subset TOPm comprises a predefined number m of univariate distances (in the example: m = 5) that make the highest contribution to the increase of the aggregate anomaly indicator AAI in Fig. 3B. This subset allows the operator, when the anomaly threshold AAT in Fig. 3B is exceeded, to immediately drill down to the most relevant signals that contribute to the abnormal system behavior indicated by the AAI. Thus, the operator can immediately focus on the potential root cause of the abnormal system behavior by concentrating on the state parameters which are potentially highly relevant for the abnormal behavior.
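A drill-down of this kind amounts to ranking the current univariate distances and reporting the m largest ones together with their signal identifiers; the Python sketch below uses made-up signal names and a made-up snapshot of distance values.

```python
def top_m_contributors(distances, m=5):
    """Return the m signals whose univariate distances contribute most to
    the aggregate anomaly indicator at one sampling time point.

    `distances` maps a signal identifier (e.g., "d20") to its current
    univariate distance value in [0, 1].
    """
    ranked = sorted(distances.items(), key=lambda item: item[1], reverse=True)
    return ranked[:m]

# Example snapshot: d20 and d21 dominate the anomaly at this time point.
current = {"d18": 0.20, "d19": 0.25, "d20": 0.97, "d21": 0.93, "d25": 0.60}
print(top_m_contributors(current, m=3))
# -> [('d20', 0.97), ('d21', 0.93), ('d25', 0.6)]
```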
Fig. 4 illustrates an example of a component hierarchy 400 of the technical system 200 including function blocks 210, 220, 230. As described in detail above, the technical state of the technical system 200 is reflected by the associated AAI. The technical system 200 typically comprises a large number of system components which are monitored by means of corresponding sensor signals. The hierarchy 400 only shows a simplified view of the technical system 200 with system components 211, 212, 221, 231, 232, 233, which are considered to represent hundreds or even thousands of components of a real-world technical process system. Each system component is associated with a respective univariate distance d211, d212, d221, d231, d232, d233 that reflects the technical state of the component. Typically, certain functions of the technical system 200 are carried out by a subset of the components acting together to carry out the respective function. In the example hierarchy 400, the components 211, 212 are grouped into the function block 210, for which an aggregate block anomaly indicator BAI1 is calculated based on the subset of univariate distances d211, d212. For example, the function block 210 may be an additive supply for a reactor, the additive supply comprising a tank 211 monitored via a level gauge, for which the univariate distance d211 is determined, and further comprising a pump 212 monitored via a flow meter, for which the univariate distance d212 is determined. The overall technical state of block 210 is then reflected by BAI1. Function blocks may also include function sub-blocks, as shown in the example of function block 220, which includes function block 230 of the next level in the hierarchy 400. For example, the function block 220 may reflect the reactor function of the technical system 200, including a function block 230 representing the reactor itself and a component 221 representing a peripheral component (e.g., an output valve) of the reactor function. For example, a chemical reactor may include components such as valves, tanks, heaters, pumps, coolers, sensors, and safety devices (such as emergency shut-off switches, etc.). The technical state of the valve 221 can be monitored by a corresponding flow meter, and a univariate distance d221 is determined for said valve 221. The technical state of the reactor 230 can be characterized by the fill level, the temperature and the pressure in the reactor. The corresponding level gauge 231, temperature gauge 232, and pressure gauge 233 are system components grouped into the reactor function block 230. The associated univariate distances d231, d232 and d233 are aggregated into a corresponding aggregate block anomaly indicator BAI3 reflecting the overall technical state of the reactor. The BAI3 is then aggregated with d221 into an aggregate block anomaly indicator BAI2 that reflects the technical state of the overall reactor function 220 including the peripheral components.
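One way to compute such block indicators is a recursive aggregation over the hierarchy, in which each function block combines the distances of its components and the indicators of its sub-blocks. The Python sketch below assumes a normalized Euclidean-style aggregation analogous to the AAI; the node class, the aggregation formula, and the example values are illustrative assumptions only.

```python
import math
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Node:
    """A node of the component hierarchy: either a system component carrying
    a univariate distance, or a function block with child nodes."""
    name: str
    distance: Optional[float] = None        # set only for leaf components
    children: List["Node"] = field(default_factory=list)

def block_anomaly_indicator(node: Node) -> float:
    """Aggregate everything below a node into a block anomaly indicator,
    here as the normalized Euclidean mean of the child contributions."""
    if node.distance is not None:           # leaf component
        return node.distance
    parts = [block_anomaly_indicator(child) for child in node.children]
    return math.sqrt(sum(p * p for p in parts) / len(parts))

# Example loosely following Fig. 4: reactor block 230 nested in block 220.
reactor = Node("230", children=[Node("231", 0.1), Node("232", 0.8), Node("233", 0.2)])
block_220 = Node("220", children=[Node("221", 0.3), reactor])
print(block_anomaly_indicator(block_220))   # BAI2 of the reactor function, here 0.4
```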
Using aggregate block anomaly indicators associated with the function blocks of the component hierarchy 400 of the technical system 200 allows an operator to quickly drill down from the system-level AAI to views of the technical system at the granularity of function blocks and to identify the functions that potentially cause the anomalous behavior of the technical system. Similar to the TOPm view of univariate distances in FIG. 3D, the user interface may also present a top-ranked list of aggregate function block indicators to the operator, allowing the operator to quickly identify functions that should be analyzed in detail because of their contribution to the abnormal behavior reflected by the associated BAI.
Figs. 5A-5C illustrate a real-world example scenario (including two reactor tanks) for which an aggregate anomaly indicator is determined. Process alarms are a known means of indicating a required action to an operator. For example, when the tank level reaches a certain limit, a high alarm is issued indicating that the tank has reached a high level. The affected device (the system component(s)) typically sends an alarm and a message text that is shown to the operator in an alarm list. The operator can then act accordingly and, for example, open a valve and activate a pump to lower the liquid level inside the tank. When an alarm occurs, it is typically visualized in an alarm list comprising the technical names of the corresponding component signals, as shown in Table 1.
Table 1: conventional alarm List example
Day/time | Signal | Type of device |
05 15:25:28 | 70_V11 | Air exhaust valve |
05 15:25:28 | 70_M1 | Fan with cooling device |
05 15:25:28 | 70_M1 | Fan with cooling device |
05 15:25:28 | 70_M2 | Fan with cooling device |
05 15:25:28 | 70_P1 | Pump and method of operating the same |
05 15:25:28 | 70_P1 | Pump and method of operating the same |
05 15:25:28 | 70_V12 | Valve with a valve body |
05 15:25:28 | 70_V13 | Valve with a valve body |
Additionally, in some cases, the alarm is also visualized directly at the device in the human machine interface. The operator can now react to those alarms. However, it is very difficult to perform any root cause analysis on this type of alarm information, as an alarm is often followed by several consequent alarms. In the example in Table 1, alarms are issued by a plurality of system components of the two reactors. The operator is overwhelmed by the large number of process alarms (an alarm flood) and cannot decide which alarm to react to. Therefore, operators need a compact visualization of technical state information that indicates the current process state and allows tracking of the process state over time.
Fig. 5A shows a (simplified) part of a technical process system 500 with two connected reactor tanks R1, R2. The pump P is capable of supplying liquid to the tanks. The inflow into the tanks is controlled by the valves VA and VB. Associated alarm visualizations AP, AVA and AVB may be implemented directly at the respective devices. Each reactor tank has a level gauge L1, L2 monitoring the filling level of the respective tank R1, R2. The discharge of the tanks is controlled by a combination of the valves VC and VD and the pumps PC and PD. Still further, the associated alarm visualizations AVC, AVD, APC and APD are available at the respective system components. For the tanks R1, R2, the level gauge values may be visualized over time as a chart with a low level indicator LL (e.g., 5% of the tank level) and a high level indicator UL (e.g., 95% of the tank level) as boundaries of the normal operating range. For example, the LL boundary may correspond to a low alarm threshold in the alarm management system of the system 500, and UL may correspond to a high alarm threshold. On an actual (real-world) operator screen, typically only the current values of the monitored technical parameters are displayed. To get a time trend of a process variable, the operator typically opens another page of the monitoring application. Thus, the visualization of the temporal trend of the level gauges L1, L2 in Figs. 5A to 5C only serves to illustrate the concept. In real systems, data showing temporal trends is typically retrieved in a multi-step interaction between the operator and the HMI.
The figure is simplified; in reality, the reactors R1, R2 may be connected to further conduits with further inflow valves (e.g., for adding additives to the liquid stored in the tanks). Further system components, such as temperature sensors or pressure sensors for characterizing the technical state of the tanks, are not shown in this figure. Those skilled in the art will appreciate that real-world process systems include many more system components. However, the simplified example of Fig. 5A is sufficient for explaining the concept of the invention.
For both reactors, the actual fill level increases over time and rises above the average level (indicated by the horizontal line between UL and LL) towards the upper limit UL. The computer system can now determine univariate distances for the level gauge signals L1, L2 and calculate the AAI for the overall process system 500. The result can be visualized to the operator via the human machine interface HMI. The operator immediately sees that, at time t1, the AAI exceeds the AAT threshold, indicating abnormal system behavior.
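The early-warning effect can be expressed compactly as a comparison of first-crossing times: the sampling index at which the AAI first exceeds AAT versus the indices at which the individual signals first exceed their conventional high alarm thresholds. The Python sketch below uses entirely made-up series and threshold values.

```python
import numpy as np

def first_exceedance(series, threshold):
    """Index of the first sample at which the series exceeds the threshold,
    or None if it never does."""
    mask = np.asarray(series, dtype=float) > threshold
    idx = int(np.argmax(mask))
    return idx if mask[idx] else None

# Made-up example: the AAI rises above AAT (here 0.7) before either level
# signal rises above its conventional high alarm threshold (here 95).
aai = [0.40, 0.55, 0.72, 0.81, 0.90, 0.95]
sr1 = [70, 78, 85, 91, 96, 98]    # level of reactor R1
sr2 = [65, 70, 76, 84, 92, 97]    # level of reactor R2

print(first_exceedance(aai, 0.7),   # t1   -> 2
      first_exceedance(sr1, 95),    # t1'  -> 4
      first_exceedance(sr2, 95))    # t1'' -> 5
```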
Fig. 5B illustrates that, for both reactors R1 and R2, the conventional alarm threshold of the respective level gauge signal is exceeded at time points t1', t1'' which are later than t1. In other words, conventional alarm management, which issues an alarm when a signal exceeds a high/low alarm threshold, indicates an abnormal situation in the system at the time point t1' occurring earliest after the time point t1. That is, the aggregate anomaly indicator AAI warns the operator at an earlier point in time than the individual alarms at the signal level. In this case, the operator is notified "early enough" (i.e., before an alarm flood is generated by the control system) that the process is evolving towards an abnormal situation. The operator can take proactive actions on the process to avoid the process reaching the abnormal situation. This can be advantageous in cases where, otherwise, some devices would have to be shut down immediately to avoid damage to certain system components. It is noted that Figs. 5B, 5C do not show the univariate distances for the level gauge parameters, but rather the signal values SR1, SR2 compared to the high alarm thresholds HTR1, HTR2. The corresponding univariate distances are calculated based on these values. The process variables SR1, SR2 can exceed their alarm thresholds HTR1, HTR2 (i.e., reach a liquid level above the upper limit). The corresponding univariate distance d(t) is bounded by 1. The aggregate anomaly indicator is therefore bounded by √N, where N is the number of process variables. The value can be normalized (i.e., divided by √N) so that D(t) is bounded by 1.
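A minimal numeric sketch of these quantities could look as follows (Python); the linear shape of the distance function, the choice of a "normal" reference value, and all names are illustrative assumptions rather than the literal formulas of the disclosure.

```python
import numpy as np

def univariate_distance(x, low, high, normal):
    """Distance of a signal value x from its normal value, scaled by the low
    and high alarm thresholds: 0 at the normal value, 1 at or beyond either
    threshold, and between 0 and 1 in between (one possible shape)."""
    if x >= normal:
        d = (x - normal) / (high - normal)
    else:
        d = (normal - x) / (normal - low)
    return float(min(d, 1.0))            # bounded by 1 even above the threshold

def aggregate_anomaly_indicator(distances):
    """Euclidean aggregation normalized by sqrt(N) so that D(t) is bounded by 1."""
    d = np.asarray(distances, dtype=float)
    return float(np.sqrt(np.sum(d ** 2)) / np.sqrt(d.size))

# Example: level signals SR1, SR2 against low/high alarm thresholds 5 % / 95 %.
d1 = univariate_distance(x=92.0, low=5.0, high=95.0, normal=50.0)
d2 = univariate_distance(x=60.0, low=5.0, high=95.0, normal=50.0)
print(d1, d2, aggregate_anomaly_indicator([d1, d2]))
```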
Fig. 5C illustrates a situation in which the drill-down from the AAI (cf. Fig. 3A) facilitates root cause analysis for the operator. In this example, only the level gauge L1 of the reactor R1 shows abnormal behavior, while L2 of R2 remains entirely within the normal range. The operator can concentrate on, and react to, the subset of process variables that contribute most to the deviation of the process anomaly indicator above its tolerable limit.
FIG. 6 is a diagram illustrating an example of a general purpose computer apparatus 900 and a general purpose mobile computer apparatus 950 that may be used with the techniques described herein. In some embodiments, computing device 900 may be related to system 100 (see fig. 1). Computing device 950 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, and other similar computing devices. In the context of the present disclosure, computing device 950 may allow a human user to interact with device 900. In other embodiments, the entire system 100 may be implemented on the mobile device 950. The components shown herein, their connections and relationships, and their functions, are intended to be exemplary only, and are not intended to limit implementations of the inventions described and/or claimed in this document.
The memory 904 stores information within the computing device 900. In one implementation, the memory 904 is a volatile memory unit(s). In another implementation, the memory 904 is a non-volatile memory unit(s). The memory 904 may also be another form of computer-readable medium, such as a magnetic or optical disk.
The high speed controller 908 manages bandwidth-intensive operations for the computing device 900, while the low speed controller 912 manages lower bandwidth-intensive operations. Such allocation of functions is merely exemplary. In one implementation, the high-speed controller 908 is coupled to memory 904, display 916 (e.g., through a graphics processor or accelerator), and high-speed expansion ports 910, which high-speed expansion ports 910 may accept various expansion cards (not shown). In this implementation, low-speed controller 912 is coupled to storage 906 and low-speed expansion port 914. The low-speed expansion port, which may include various communication ports (e.g., USB, bluetooth, ethernet, wireless ethernet), may be coupled to one or more input/output devices, such as a keyboard, pointing device, scanner, or networking device, such as a switch or router, for example, through a network adapter.
As shown in this figure, computing device 900 may be implemented in a number of different forms. For example, it may be implemented as a standard server 920 or multiple times in a group of such servers. It may also be implemented as part of a rack server system 924. Additionally, it may be implemented in a personal computer (such as laptop 922). Alternatively, components from computing device 900 may be combined with other components in a mobile device (not shown), such as device 950. Each of such devices may contain one or more of computing devices 900, 950, and an entire system may be made up of multiple computing devices 900, 950 communicating with each other.
The processor 952 is capable of executing instructions within the computing device 950, including instructions stored in the memory 964. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor may provide, for example, for coordination of the other components of the device 950 (such as control of user interfaces, applications run by device 950, and wireless communication by device 950).
The memory 964 stores information within the computing device 950. The memory 964 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 984 may also be provided and connected to device 950 through expansion interface 982, which expansion interface 982 may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 984 may provide extra storage space for device 950, or may also store applications or other information for device 950. In particular, expansion memory 984 may include instructions to carry out or supplement the processes described above, and may also include secure information. Thus, for example, expansion memory 984 may serve as a security module for device 950 and may be programmed with instructions that permit secure use of device 950. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
As discussed below, the memory may include, for example, flash memory and/or NVRAM memory. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer-or machine-readable medium, such as the memory 964, expansion memory 984, or memory on processor 952, that may be received, for example, over transceiver 968 or external interface 962.
The device 950 may communicate wirelessly through the communication interface 966, which communication interface 966 may include digital signal processing circuitry if necessary. Communication interface 966 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 968. Additionally, short-range communication may occur, such as using a bluetooth, WiFi, or other such transceiver (not shown). Additionally, GPS (global positioning system) receiver module 980 may provide additional navigation-and location-related wireless data (which may be used as appropriate by applications running on device 950) to device 950.
As shown in this figure, computing device 950 may be implemented in many different forms. For example, the computing device 950 may be implemented as a cellular telephone 980. Computing device 950 may also be implemented as part of a smart phone 982, personal digital assistant, or other similar mobile device.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can also be used to provide for interaction with the user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing device that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), and the internet.
The computing device can include a client and a server. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Claims (15)
1. A computer-implemented method (1000) for determining an abnormal technical state of a technical system (200), comprising:
receiving (1100) a plurality of signals from the technical system (200), wherein each signal (S1 to Sn) is sampled over time and reflects a technical state of at least one system component;
for each signal (S1) having associated high and low alarm thresholds obtained from the alarm management system (300), calculating (1300), at each sampling time point, a univariate distance from its associated alarm thresholds (H1/L1) as a maximum of the distances between the value of the respective signal and its associated alarm thresholds, to quantify the degree of anomaly for the respective at least one system component;
calculating (1400), at each sampling time point, an Aggregate Anomaly Indicator (AAI) reflecting the technical state of the entire technical system (200) based on the univariate distances at the respective sampling time point; and
providing (1500) an operator (10) a comparison of the Aggregate Anomaly Indicator (AAI) with a predetermined anomaly threshold value which ensures with a given probability that an aggregate anomaly indicator value reflects normal operation of the technical system below the anomaly threshold value, wherein the abnormal technical state is determined when the aggregate anomaly indicator exceeds the anomaly threshold value.
2. The method according to claim 1, wherein the anomaly threshold value is determined by using a cumulative distribution function of the Aggregate Anomaly Indicators (AAI) during normal operation of the technical system (200).
3. The method according to claim 1 or 2, wherein the aggregate anomaly indicator at a particular sampling time point is calculated as:
euclidian distance based on the univariate distance of the corresponding signal and the total number of signals, or
A weighted Euclidian distance based on the univariate distance of the respective signal and a total number of signals, wherein each univariate distance contribution is weighted with a weighting factor corresponding to a severity of an alarm associated with the respective signal as defined in the alarm management system.
4. The method according to any of the preceding claims, wherein, prior to the calculating step (1300, 1400), a Steady State Detection Algorithm (SDA) is used for determining (1250) whether the technical system (200) is operating in a steady state process, and the calculating step (1300, 1400) is suppressed when the process is not in a steady state.
5. The method of any of the preceding claims, further comprising:
further providing the operator with a subset of the univariate distances (TOPm) at the respective sampling point in time, wherein the subset relates to such univariate distances having the highest contribution to the increase of the aggregate anomaly measure, wherein the size m of the subset (TOPm) is predefined.
6. The method according to any of the preceding claims, wherein the univariate distance for a particular signal at a particular sampling time point is calculated such that: the distance value is between 0 and 1 if the sampled signal value is between the low alarm threshold and the high alarm threshold; the distance value is 1 if the sampled signal value is less than or equal to the low alarm threshold or greater than or equal to the high alarm threshold; and the distance value is 0 if the sampled signal value corresponds to a predefined parameter value reflecting normal operation.
7. The method of any of claims 1 to 5, wherein the univariate distance for a particular signal at a particular sampling time point is smoothed by exponential smoothing.
8. The method of claim 7, wherein the univariate distance for a particular signal at a particular sampling time point is calculated by introducing an interval [a1, a2] defining a normal range of the signal, wherein the upper limit a2 of the interval is less than the corresponding high alarm threshold xh and the lower limit a1 of the interval is greater than the corresponding low alarm threshold xl, such that: if the sampled signal value is within the interval, the distance value is 0; for x(t) < a1 the distance value is […], and for x(t) > a2 the distance value is […], wherein a > 1.
9. The method according to any one of the preceding claims, wherein a component hierarchy (400) of the technical system defines a plurality of functional blocks (210, 220) as child nodes of the technical system (200), wherein each functional block (210, 220) comprises a plurality of child nodes comprising further functional blocks (230) and/or system components (211, 212, 221), the method further comprising:
calculating (1450) an aggregated Block Anomaly Indicator (BAI) for a particular function block at each sampling time point based on a subset of univariate distances associated with the particular function block at the respective sampling time point, wherein the Block Anomaly Indicator (BAI) reflects the technical state of the function block; and
providing (1550) the operator a comparison of the Block Anomaly Indicator (BAI) with a predetermined block anomaly threshold value that ensures with a given probability that an aggregate block anomaly indicator value reflects normal operation of the particular functional block when below the block anomaly threshold value.
10. The method of any of the preceding claims, wherein a particular technical state parameter is represented by a plurality of sensor signals that provide redundant information for specifying the particular technical state, the method further comprising:
aggregating the univariate distances associated with the plurality of sensor signals to provide a robust univariate distance for the particular technical state parameter.
11. A computer program product provided for determining an abnormal technical state of a technical system (200), the computer program product comprising instructions which, when loaded into a memory of a computer system and executed by at least one processor of the computer system, cause the computer system to carry out the method steps according to any one of claims 1 to 10.
12. A computer system (100) for determining an abnormal technical state of a technical system (200), the computer system (100) comprising:
an interface (110), the interface (110):
configured to receive a plurality of signals from the technical system (200), wherein each signal (S1 to Sn) is sampled over time and reflects the technical state of at least one system component; and
configured to retrieve from an alarm management system (300) associated with the technical system (200) high alarm thresholds (H1 to Hn) and low alarm thresholds (L1 to Ln) associated with respective received signals (S1 to Sn), wherein signal values of particular signals within a range between the associated high alarm thresholds and the associated low alarm thresholds reflect normal operation of the respective at least one system component; and
a data processor (120), the data processor (120) being:
configured to calculate, for each signal (S1) having an associated alarm threshold, at each sampling time point, a univariate distance from its associated alarm threshold (H1/L1) as a maximum of the distances between the value of the respective signal and its associated alarm threshold to quantify the degree of anomaly for the respective at least one system component; and
configured to calculate, at each sampling time point, an Aggregate Anomaly Indicator (AAI) reflecting the technical state of the entire technical system (200) based on the univariate distance at the respective sampling time point; and
a user interface component (130), the user interface component (130) configured to provide a comparison of the Aggregate Anomaly Indicator (AAI) with a predetermined anomaly threshold value (AAT) to an operator (10), the anomaly threshold value ensuring with a given probability that an aggregate anomaly indicator value reflects normal operation of the technical system below the anomaly threshold value, wherein the abnormal technical state is determined when the aggregate anomaly indicator exceeds the anomaly threshold value.
13. The computer system of claim 12, wherein the data processor (120) further comprises:
a computation switch (123) utilizing a Steady State Detection Algorithm (SDA), the computation switch (123) configured to determine whether the technical system (200) is operating in a steady state process and to inhibit subsequent computation steps when the process is not in a steady state.
14. The computer system of claim 12 or 13, wherein a component hierarchy of the technical system defines a plurality of function blocks as child nodes of the technical system (200), wherein each function block comprises a plurality of child nodes comprising further function blocks and/or system components,
the processor (120) is further configured to calculate, at each sampling time point, an aggregated Block Anomaly Indicator (BAI) for a particular function block based on a subset of univariate distances associated with the particular function block at the respective sampling time point, wherein the Block Anomaly Indicator (BAI) reflects the technical state of the function block; and
the user interface (130) is further configured to provide the operator with a comparison of the Block Anomaly Indicator (BAI) with a predetermined block anomaly threshold value that ensures with a given probability that an aggregate block anomaly indicator value reflects normal operation of the particular functional block when below the block anomaly threshold value.
15. The computer system of any of claims 12 to 14, the user interface (130) further configured to:
providing the operator with a subset of the univariate distances (TOPm) at the respective sampling point in time, wherein the subset relates to such distances having the highest contribution to the increase of the aggregate anomaly measure, wherein the size m of the subset (TOPm) is predefined.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18196241.6 | 2018-09-24 | ||
EP18196241.6A EP3627263B8 (en) | 2018-09-24 | 2018-09-24 | System and methods monitoring the technical status of technical equipment |
PCT/EP2019/073957 WO2020064309A1 (en) | 2018-09-24 | 2019-09-09 | System and methods monitoring the technical status of technical equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112740133A true CN112740133A (en) | 2021-04-30 |
Family
ID=63683658
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201980062659.0A Pending CN112740133A (en) | 2018-09-24 | 2019-09-09 | System and method for monitoring the technical state of a technical installation |
Country Status (4)
Country | Link |
---|---|
US (1) | US12019432B2 (en) |
EP (1) | EP3627263B8 (en) |
CN (1) | CN112740133A (en) |
WO (1) | WO2020064309A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4268205A1 (en) * | 2020-12-31 | 2023-11-01 | Schneider Electric Systems USA, Inc. | Systems and methods for providing operator variation analysis for transient operation of continuous or batch wise continuous processes |
CN114484732B (en) * | 2022-01-14 | 2023-06-02 | 南京信息工程大学 | Air conditioning unit sensor fault diagnosis method based on voting network |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1950778A (en) * | 2004-03-09 | 2007-04-18 | Ip锁有限公司 | Database user behavior monitor system and method |
CN101268427A (en) * | 2005-09-20 | 2008-09-17 | 费舍-柔斯芒特系统股份有限公司 | Aggregation of asset use indices within a process plant |
US20100185414A1 (en) * | 2009-01-16 | 2010-07-22 | Hitachi Cable,Ltd. | Abnormality detection method and abnormality detection system for operating body |
CN102117443A (en) * | 2010-01-05 | 2011-07-06 | 国际商业机器公司 | Analyzing anticipated value and effort in using cloud computing to process a specified workload |
CN102200769A (en) * | 2011-03-30 | 2011-09-28 | 北京三博中自科技有限公司 | Real-time alarm system for industrial enterprise and method thereof |
CN102231081A (en) * | 2011-06-14 | 2011-11-02 | 北京三博中自科技有限公司 | Energy utilization state diagnosis method for process industrial equipment |
CN102539154A (en) * | 2011-10-16 | 2012-07-04 | 浙江吉利汽车研究院有限公司 | Engine fault diagnosis method and device based on exhaust noise vector quantitative analysis |
CN102763047A (en) * | 2009-12-19 | 2012-10-31 | 诺沃皮尼奥内有限公司 | Method and system for diagnosing compressors |
US8311973B1 (en) * | 2011-09-24 | 2012-11-13 | Zadeh Lotfi A | Methods and systems for applications for Z-numbers |
CN102830662A (en) * | 2011-06-14 | 2012-12-19 | 北京三博中自科技有限公司 | Monitoring system and method of flow industrial pipe network system |
CN103513983A (en) * | 2012-06-25 | 2014-01-15 | 国际商业机器公司 | Method and system for predictive alert threshold determination tool |
US20140097952A1 (en) * | 2012-10-10 | 2014-04-10 | General Electric Company | Systems and methods for comprehensive alarm management |
US20150095100A1 (en) * | 2013-09-30 | 2015-04-02 | Ge Oil & Gas Esp, Inc. | System and Method for Integrated Risk and Health Management of Electric Submersible Pumping Systems |
CN104598995A (en) * | 2015-01-27 | 2015-05-06 | 四川大学 | Regional water resource allocation bi-level decision-making optimization method based on water right |
CN205880599U (en) * | 2016-05-05 | 2017-01-11 | 华电国际电力股份有限公司技术服务中心 | Unit exception monitored control system |
CN106775929A (en) * | 2016-11-25 | 2017-05-31 | 中国科学院信息工程研究所 | A kind of virtual platform safety monitoring method and system |
US20170214706A1 (en) * | 2010-11-18 | 2017-07-27 | Nant Holdings Ip, Llc | Vector-based anomaly detection |
US20170331844A1 (en) * | 2016-05-13 | 2017-11-16 | Sikorsky Aircraft Corporation | Systems and methods for assessing airframe health |
CN207123598U (en) * | 2016-12-02 | 2018-03-20 | Abb瑞士股份有限公司 | Configurable state monitoring apparatus |
KR20180042483A (en) * | 2016-10-17 | 2018-04-26 | 고려대학교 산학협력단 | Method and appratus for detecting anomaly of vehicle based on euclidean distance measure |
CN108445865A (en) * | 2018-03-08 | 2018-08-24 | 云南电网有限责任公司电力科学研究院 | A kind of method and system for the major-minor equipment dynamic alert of fired power generating unit |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB0515726D0 (en) | 2005-07-30 | 2005-09-07 | Curvaceous Software Ltd | Multi-variable operations |
GB0717991D0 (en) | 2007-09-15 | 2007-10-24 | Curvaceous Software Ltd | Multi-variable operations |
CN109213654B (en) * | 2018-07-05 | 2023-01-03 | 北京奇艺世纪科技有限公司 | Anomaly detection method and device |
-
2018
- 2018-09-24 EP EP18196241.6A patent/EP3627263B8/en active Active
-
2019
- 2019-09-09 WO PCT/EP2019/073957 patent/WO2020064309A1/en active Application Filing
- 2019-09-09 CN CN201980062659.0A patent/CN112740133A/en active Pending
-
2021
- 2021-03-22 US US17/207,854 patent/US12019432B2/en active Active
Patent Citations (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1950778A (en) * | 2004-03-09 | 2007-04-18 | Ip锁有限公司 | Database user behavior monitor system and method |
CN101268427A (en) * | 2005-09-20 | 2008-09-17 | 费舍-柔斯芒特系统股份有限公司 | Aggregation of asset use indices within a process plant |
US20100185414A1 (en) * | 2009-01-16 | 2010-07-22 | Hitachi Cable,Ltd. | Abnormality detection method and abnormality detection system for operating body |
CN102763047A (en) * | 2009-12-19 | 2012-10-31 | 诺沃皮尼奥内有限公司 | Method and system for diagnosing compressors |
CN102117443A (en) * | 2010-01-05 | 2011-07-06 | 国际商业机器公司 | Analyzing anticipated value and effort in using cloud computing to process a specified workload |
US20170214706A1 (en) * | 2010-11-18 | 2017-07-27 | Nant Holdings Ip, Llc | Vector-based anomaly detection |
CN102200769A (en) * | 2011-03-30 | 2011-09-28 | 北京三博中自科技有限公司 | Real-time alarm system for industrial enterprise and method thereof |
CN102231081A (en) * | 2011-06-14 | 2011-11-02 | 北京三博中自科技有限公司 | Energy utilization state diagnosis method for process industrial equipment |
CN102830662A (en) * | 2011-06-14 | 2012-12-19 | 北京三博中自科技有限公司 | Monitoring system and method of flow industrial pipe network system |
US8311973B1 (en) * | 2011-09-24 | 2012-11-13 | Zadeh Lotfi A | Methods and systems for applications for Z-numbers |
CN102539154A (en) * | 2011-10-16 | 2012-07-04 | 浙江吉利汽车研究院有限公司 | Engine fault diagnosis method and device based on exhaust noise vector quantitative analysis |
CN103513983A (en) * | 2012-06-25 | 2014-01-15 | 国际商业机器公司 | Method and system for predictive alert threshold determination tool |
US20140097952A1 (en) * | 2012-10-10 | 2014-04-10 | General Electric Company | Systems and methods for comprehensive alarm management |
US20150095100A1 (en) * | 2013-09-30 | 2015-04-02 | Ge Oil & Gas Esp, Inc. | System and Method for Integrated Risk and Health Management of Electric Submersible Pumping Systems |
CN105765475A (en) * | 2013-09-30 | 2016-07-13 | 通用电气石油和天然气Esp公司 | System and method for integrated risk and health management of electric submersible pumping systems |
CN104598995A (en) * | 2015-01-27 | 2015-05-06 | 四川大学 | Regional water resource allocation bi-level decision-making optimization method based on water right |
CN205880599U (en) * | 2016-05-05 | 2017-01-11 | 华电国际电力股份有限公司技术服务中心 | Unit exception monitored control system |
US20170331844A1 (en) * | 2016-05-13 | 2017-11-16 | Sikorsky Aircraft Corporation | Systems and methods for assessing airframe health |
KR20180042483A (en) * | 2016-10-17 | 2018-04-26 | 고려대학교 산학협력단 | Method and appratus for detecting anomaly of vehicle based on euclidean distance measure |
CN106775929A (en) * | 2016-11-25 | 2017-05-31 | 中国科学院信息工程研究所 | A kind of virtual platform safety monitoring method and system |
CN207123598U (en) * | 2016-12-02 | 2018-03-20 | Abb瑞士股份有限公司 | Configurable state monitoring apparatus |
CN108445865A (en) * | 2018-03-08 | 2018-08-24 | 云南电网有限责任公司电力科学研究院 | A kind of method and system for the major-minor equipment dynamic alert of fired power generating unit |
Non-Patent Citations (2)
Title |
---|
申宇皓; 孟晨; 高聪杰; 傅振华; 李健: "Petri Net Modeling and Analysis for Condition Monitoring and Fault Diagnosis", Computer Measurement & Control, vol. 17, no. 05, 25 May 2009 (2009-05-25), pages 826-829 *
陆云松; 王福利; 贾明兴: "Cause Diagnosis of Insufficient Discharge Capacity of Centrifugal Compressors Based on Qualitative Simulation and Fuzzy Knowledge", Acta Automatica Sinica, vol. 41, no. 11, 24 July 2015 (2015-07-24), pages 1867-1876 *
Also Published As
Publication number | Publication date |
---|---|
EP3627263B1 (en) | 2021-09-01 |
EP3627263A1 (en) | 2020-03-25 |
US20210209189A1 (en) | 2021-07-08 |
WO2020064309A1 (en) | 2020-04-02 |
EP3627263B8 (en) | 2021-11-17 |
US12019432B2 (en) | 2024-06-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9405291B2 (en) | Systems and methods to monitor an asset in an operating process unit | |
US10204226B2 (en) | Feature and boundary tuning for threat detection in industrial asset control system | |
US10192170B2 (en) | System and methods for automated plant asset failure detection | |
US9998487B2 (en) | Domain level threat detection for industrial asset control system | |
US7398184B1 (en) | Analyzing equipment performance and optimizing operating costs | |
US20180137277A1 (en) | Dynamic normalization of monitoring node data for threat detection in industrial asset control system | |
JP2014032671A (en) | Systems and methods to monitor pump cavitation | |
US10901406B2 (en) | Method of monitoring and controlling an industrial process, and a process control system | |
CN111090939B (en) | Early warning method and system for abnormal working condition of petrochemical device | |
US20150241304A1 (en) | Method for the computer-assisted monitoring of the operation of a technical system, particularly of an electrical energy-generating installation | |
US11916940B2 (en) | Attack detection and localization with adaptive thresholding | |
US12019432B2 (en) | System and methods monitoring the technical status of technical equipment | |
KR102062992B1 (en) | Systems and methods for determining abnormal conditions based on plant alarms or symptom and change of major operating variables | |
WO2018136841A1 (en) | Expert-augmented machine learning for condition monitoring | |
KR20150027178A (en) | Data classification method based on correlation, and a computer-readable storege medium having program to perform the same | |
CN116520798A (en) | Method, device, equipment and storage medium for diagnosing fault of regulating door | |
US20180349816A1 (en) | Apparatus And Method For Dynamic Risk Assessment | |
AU2022202976A1 (en) | Artificial intelligence alarm management | |
CN113051700A (en) | Equipment reliability monitoring method and device | |
CN118709855A (en) | Power system data anomaly detection method and device, electronic equipment and storage medium | |
KR20220079683A (en) | Protection of industrial production from sophisticated attacks | |
WO2024072729A1 (en) | A general reinforcement learning framework for process monitoring and anomaly/ fault detection | |
CN113052320A (en) | Equipment safety monitoring method and device | |
TW202419998A (en) | System and method for estimating false alarm detection to optimize early warning management | |
CN118584878A (en) | Data acquisition and control system of digital air compression station |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||