Article

Detection of Actuator Enablement Attacks by Petri Nets in Supervisory Control Systems

1 College of Computer Science and Technology, Xi’an University of Science and Technology, Xi’an 710054, China
2 School of Mechano-Electronic Engineering, Xidian University, Xi’an 710071, China
3 School of Foreign Languages, Changji College, Changji 831100, China
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(4), 943; https://doi.org/10.3390/math11040943
Submission received: 2 January 2023 / Revised: 25 January 2023 / Accepted: 28 January 2023 / Published: 13 February 2023
Figure 1. The closed-loop control system architecture.
Figure 2. The control system architecture.
Figure 3. The flowchart of AE-attack detection.
Figure 4. The plant G.
Figure 5. The supervisor H.
Figure 6. $G_M$: the closed-loop system under attacks.
Figure 7. $G_{\mathcal{B}}$: the basis attack model.
Figure 8. Graphic representation of AE-safe controllability.
Figure 9. Label automaton $A_\xi$.
Figure 10. Label basis attack model $G_\xi$.
Figure 11. (a) The system model under normal behavior $G_N$ and (b) the system model under attacked behavior $G_F$.
Figure 12. The basis verifier $G_V$.
Figure 13. The basis verifier under attacks $G_V^{cd}$.
Figure 14. The combined basis verifier $G_T$.
Figure 15. Petri net G.
Figure 16. Supervisor H.
Figure 17. The closed-loop system under attacks $G_M$.
Figure 18. The basis attack model $G_{\mathcal{B}}$.
Figure 19. A part of the basis diagnoser $G_D$.

Abstract

The feedback control system with network-connected components is vulnerable to cyberattacks. We study the problem of attack detection in the supervisory control of discrete-event systems, considering the scenario of a system subjected to actuator enablement attacks. We also consider that some unsafe places that should be protected from an attacker exist in the system, and that some controllable events that are disabled by a supervisor might be re-enabled by an attacker. This article proposes a defense strategy that detects actuator enablement attacks and disables all controllable events after an attack is detected. We design algorithmic procedures to determine whether the system can be protected against damage caused by actuator enablement attacks, where the damage is predefined as a set of “unsafe” places; this system property is called “AE-safe controllability”. AE-safe controllability can be verified with either a basis diagnoser or a basis verifier. Finally, we illustrate the approach with a cargo system example.

1. Introduction

The cyber–physical system (CPS) is an intelligent system that integrates communication, control, and computing. Safe supervisory control against potential attacks in cyber–physical systems has drawn extensive attention in recent years [1,2,3,4,5,6,7]. To better describe system behaviors, cyber–physical systems are often abstracted as discrete-event systems (DESs). Due to the significance of security concerns in cyber–physical systems, it is necessary to consider attack detection within the framework of supervisory control of discrete-event systems [8,9].
In this article, we explore the issue based on the closed-loop control system shown in Figure 1, where the supervisor controls the system through actuators and sensors. However, the actuators and sensors are often vulnerable to attacks while delivering signals, and attackers can potentially alter the transmitted signals. The object of our study is an event-driven discrete-event system in which the supervisor disables some actuator events according to a given specification. We study the intrusion detection of actuator enablement attacks (AE-attacks) in a closed-loop control system. Specifically, some actuators in the system are vulnerable to intrusion, and an attacker indirectly causes the system to enter an unsafe state by changing the control actions of the vulnerable actuators from “disabled” to “enabled”.
The study of attack detection in the context of DESs can be traced back as far as the work in [10]. The work in [11] considers a problem of synthesizing a supervisor under removal attacks and sensor insertion attacks. The approach in [12] considers the detection and mitigation of actuator and sensor attacks. In [13], the authors discuss the robust control problem under a sensor replacement attack. The work in [14] investigates integrated sensor deception attacks in the context of DESs. The work in [15,16] focuses on intrusion detection in which the supervisor determines the presence of an intruder by diagnosing faulty behavior in the system. The study in [17,18] presents the issue of supervisory control of DESs under malicious attacks using labeled Petri nets (LPNs). In [19], a method of constructing a resilient automaton is proposed by introducing the safety level of the system, which transforms the resilient supervisory synthesis problem into a supervisory control problem. The study in [20] proposes a new attack mitigation strategy that maximizes the scope of the normal specification while ensuring security. In [21], Rashidinejad et al. outline the existing methods to prevent damage from cyberattacks in cyber–physical systems. The work in [22] investigates joint sensor-actuator network attacks in DESs, defines upper and lower bounds on the language to describe nondeterministic behavior, and successfully solves the issue of supervisory control under network attacks.
The work in [23] proposes a generic attack detection framework with respect to four different types of cyberattacks in supervisory control systems. An automaton model is used to characterize the behaviors of systems under attack. Essentially, the use of an automaton model for the description of systems has worked well. However, as DESs become larger and more complex, the state space of the system grows exponentially with the system scale, i.e., there is an issue of “state space explosion”. The large scale of the system increases the probability of failure, and the state space explosion also increases the difficulty of fault diagnosis [24]. The aim of our work is to compensate for this drawback and improve the existing detection methods.
The fundamental framework of supervisory control systems in [23] is adopted. However, in contrast to the model in [23], we describe the behavior of a system using a Petri net and construct a basis attack model. More specifically, we replace the automaton with a Petri net and establish a supervisory control system for the detection of AE-attacks. A basis reachability graph (BRG) is proposed in [25], in which the transitions are divided into two parts and the net behavior is described by a subset of the reachable markings. Our approach is motivated by the research in [25]. However, the work in [25] only classifies events as observable and unobservable. In this article, we use Petri nets to address the issue of attack detection under AE-attacks, which has never been addressed in the literature. The attack detection problem using Petri nets belongs to the class of NP-complete problems. Finally, we indicate that only AE-attacks are considered in this article in order to present our main results.
Based on the above motivation, this article investigates the detection of AE-attacks by using Petri nets in a control system. In a supervisory control system, there are places that are unsafe or critical and that should be prevented from being accessed externally, and our goal is to build an attack model by using a Petri net. If the supervisor can block access to unsafe places after an attack, then the system satisfies safe controllability. Note that the traditional framework for detecting AE-attacks using automata has the same purpose as our work. However, the BRG alleviates the problem of state explosion and is an improvement on the original approach.
The main contributions of the article are outlined as follows:
(1) We modify the existing approach that originally uses automata to describe the system behavior. In the article, we use Petri nets instead of automata. Compared with automata, Petri nets can describe the system behavior in a more compact structure without exhausting the entire state space. Moreover, we use semi-structural approaches to reduce the computational burden in the attack detection problem.
(2) A new approach for constructing the BRG is proposed, in which the explanation vectors are computed for controllable events and uncontrollable events are omitted from the state space, making it more efficient to analyze the system behavior after an attack has occurred.
The remainder of this article is organized as follows. The necessary fundamental knowledge is recalled in Section 2. The notion of AE-attacks and the approach to construct a basis attack model are outlined in Section 3. Section 4 presents the notion of AE-safe controllability and gives algorithms for analyzing it. Section 5 analyzes the computational efficiency of the approach and reports experimental results. Section 6 illustrates the approach with a cargo delivery example. Section 7 concludes the whole article.

2. Preliminaries

2.1. Basics of Petri Nets

A Petri net (or Petri net structure) is a four-tuple $N = (P, T, F, W)$, where $P$ is a finite set of places, $T$ is a finite set of transitions, $P \cap T = \emptyset$ and $P \cup T \neq \emptyset$. We denote by $F \subseteq (P \times T) \cup (T \times P)$ the set of arcs from places to transitions and from transitions to places in the graph. $W: (P \times T) \cup (T \times P) \to \mathbb{N}$ is a mapping that attributes a weight to each arc, where $\mathbb{N}$ is the set of non-negative integers. We denote by ${}^{\bullet}t = \{p \in P \mid (p, t) \in F\}$ and $t^{\bullet} = \{p \in P \mid (t, p) \in F\}$ the sets of input places and output places of a transition $t$, respectively. Similarly, we define ${}^{\bullet}p = \{t \in T \mid (t, p) \in F\}$ and $p^{\bullet} = \{t \in T \mid (p, t) \in F\}$. The marking $M$ of a Petri net $N = (P, T, F, W)$ is a mapping from $P$ to $\mathbb{N}$.
A transition $t$ is said to be enabled at a marking $M$ if $\forall p \in {}^{\bullet}t$, $M(p) \geq W(p, t)$, denoted as $M[t\rangle$. When firing an enabled transition $t$, $W(p, t)$ tokens are removed from every input place $p$ of $t$, and $W(t, p)$ tokens are added to every output place $p$ of $t$, generating a new marking $M'$ such that $\forall p \in P$, $M'(p) = M(p) - W(p, t) + W(t, p)$. Firing $t$ at marking $M$ reaches marking $M'$, denoted as $M[t\rangle M'$. The incidence matrix $C$ of $N$ is a $|P| \times |T|$ integer matrix with $C(p, t) = W(t, p) - W(p, t)$. According to the firing rule of a transition, if a transition $t$ is enabled at $M$, then firing $t$ reaches the marking $M' = M + C(\cdot, t)$. Consequently, for any finite transition sequence $\sigma$ of $\langle N, M_0 \rangle$, we write $M_0[\sigma\rangle M$ to represent that the sequence of transitions $\sigma$ is enabled at $M_0$ and that its firing yields $M$. Let $\vec{\sigma}$ be the Parikh vector of $\sigma \in T^*$ [26]; then
$$M = M_0 + C \cdot \vec{\sigma}.$$
A Petri net is denoted as $G = \langle N, M_0 \rangle$, where $M_0$ is the initial marking. We use $R(G)$ to represent the set of all markings that are reachable from $M_0$ in $N$. A net $N$ is bounded if there exists an integer $K \in \mathbb{N}$ such that for all $M \in R(G)$ and $p \in P$, $M(p) \leq K$ holds.
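To make the enabling rule and the state equation concrete, the following short Python sketch (not taken from the paper; the two-place example net and all names are illustrative assumptions) encodes the pre- and post-incidence matrices as NumPy arrays and checks that firing a sequence reproduces $M = M_0 + C \cdot \vec{\sigma}$.

import numpy as np

Pre  = np.array([[1, 0],   # W(p, t): arc weights from places to transitions
                 [0, 1]])
Post = np.array([[0, 1],   # W(t, p): arc weights from transitions to places
                 [1, 0]])
C = Post - Pre             # incidence matrix C(p, t) = W(t, p) - W(p, t)
M0 = np.array([1, 0])      # initial marking

def enabled(M, t):
    # t is enabled at M iff M(p) >= W(p, t) for every input place p of t
    return np.all(M >= Pre[:, t])

def fire(M, t):
    # firing t at M yields M' = M + C(., t)
    assert enabled(M, t)
    return M + C[:, t]

# Firing the sequence sigma = t0 t1 reproduces the state equation.
M = fire(fire(M0, 0), 1)
sigma_vec = np.array([1, 1])               # Parikh vector of sigma
assert np.array_equal(M, M0 + C @ sigma_vec)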

2.2. Basis Markings and Basis Reachability Graph

We review several results on basis markings presented in [25,27]. In a basis partition $(T_c, T_{uc})$, the set $T$ is partitioned into the controllable transition set $T_c$ and the uncontrollable transition set $T_{uc}$. $C_{uc}$ is the incidence matrix restricted to $P \times T_{uc}$, and the $T_{uc}$-induced subnet is the net $(P, T_{uc}, F', W')$, where $F'$ and $W'$ are the restrictions of $F$ and $W$ to $T_{uc}$, respectively. We denote $|T_c| = n_c$ and $|T_{uc}| = n_{uc}$.
Definition 1. 
Given a marking $M$ and a controllable transition $t \in T_c$, we define
$$\Sigma(M, t) = \{\sigma \in T_{uc}^* \mid M[\sigma\rangle M',\ M' \geq Pre(\cdot, t)\}$$
as the set of explanations of $t$ at $M$, and we define
$$Y(M, t) = \{y_\sigma \in \mathbb{N}^{n_{uc}} \mid \sigma \in \Sigma(M, t)\}$$
as the set of explanation vectors (or e-vectors).
Therefore, $\Sigma(M, t)$ is the set of sequences of uncontrollable transitions whose firing at marking $M$ enables transition $t$, and $Y(M, t)$ consists of the firing vectors associated with the sequences in $\Sigma(M, t)$.
Definition 2. 
Given a marking $M$ and a transition $t \in T_c$, we define
$$\Sigma_{min}(M, t) = \{\sigma \in \Sigma(M, t) \mid \nexists \sigma' \in \Sigma(M, t) : y_{\sigma'} \lneq y_\sigma\}$$
as the set of minimal explanations of $t$ at $M$, and we define
$$Y_{min}(M, t) = \{y_\sigma \in \mathbb{N}^{n_{uc}} \mid \sigma \in \Sigma_{min}(M, t)\}$$
as the corresponding set of minimal e-vectors.
With the above definitions, a basis marking can be defined as follows. Given a Petri net $G = \langle N, M_0 \rangle$ with its reachability set $R(G)$, the set of basis markings $\mathcal{M}$ is the subset of $R(G)$ satisfying: (1) $M_0 \in \mathcal{M}$; (2) for all $M \in \mathcal{M}$, $t \in T_c$, and $y_{uc} \in Y_{min}(M, t)$, it holds that $M' \in \mathcal{M}$, where $M' = M + C_{uc} \cdot y_{uc} + C(\cdot, t)$.
Briefly, the set of basis markings consists of two parts: the initial marking and the markings reachable from $M_0$ by firing each controllable transition together with one of its minimal explanations. All basis markings can be obtained by iterative computation starting from the initial marking $M_0$.
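The iterative computation of minimal explanations can be sketched as a breadth-first search over firings of uncontrollable transitions. The Python fragment below is an illustrative sketch rather than the authors' implementation: it assumes a bounded net whose $T_{uc}$-induced subnet is acyclic (cf. the assumptions in Section 3.2) so that the search terminates, and Pre, C, and the index list T_uc are assumed data structures.

from collections import deque
import numpy as np

def min_explanations(M, t, Pre, C, T_uc):
    # Return Y_min(M, t): minimal firing vectors over T_uc whose firing at M enables t.
    n_uc = len(T_uc)
    start = (tuple(M), (0,) * n_uc)
    queue, seen, found = deque([start]), {start}, []
    while queue:
        m_t, y = queue.popleft()
        m = np.array(m_t)
        if np.all(m >= Pre[:, t]):           # t is enabled: y is an explanation vector
            found.append(y)
            continue                         # any extension of y cannot be minimal
        for i, tu in enumerate(T_uc):        # otherwise fire one more uncontrollable transition
            if np.all(m >= Pre[:, tu]):
                y2 = tuple(y[j] + (1 if j == i else 0) for j in range(n_uc))
                nxt = (tuple(m + C[:, tu]), y2)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    # keep only the componentwise-minimal vectors
    return {y for y in found
            if not any(z != y and all(a <= b for a, b in zip(z, y)) for z in found)}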
The BRG generated by a Petri net is a quadruple $B = (\mathcal{M}, T', \delta, M_0)$, representing a finite-state automaton comprised of all basis markings, where: (1) the set $\mathcal{M}$ contains all basis markings; (2) the set $T'$ is the set of transitions $t \in T_c$; (3) the transition function is $\delta: \mathcal{M} \times T' \to \mathcal{M}$, i.e., $\delta(M_1, t) = M_2$ with $M_2 = M_1 + C_{uc} \cdot y_{uc} + C(\cdot, t)$ and $y_{uc} \in Y_{min}(M_1, t)$; the function $\delta$ can be extended to $\mathcal{M} \times T'^* \to \mathcal{M}$, where $T'^*$ is the Kleene closure of $T'$ [26]; and (4) the state $M_0$ is the initial marking.

2.3. Supervisory Control Theory

It is assumed that the plant is modeled by a Petri net $G = \langle N, M_0 \rangle$. Assume that $T = T_o \,\dot{\cup}\, T_{uo}$, where $T_o$ and $T_{uo}$ represent the sets of observable and unobservable transitions, respectively. Similarly, $T = T_c \,\dot{\cup}\, T_{uc}$, where $T_c$ and $T_{uc}$ are the sets of controllable and uncontrollable transitions, respectively. When the behavior of $G$ needs to be restricted to satisfy a specification $\mathcal{K}$, we introduce a feedback control loop as well as a supervisor. The language generated by $G$ is defined by $L(G) := \{s \in T^* : M_0[s\rangle\}$, which is a set of strings. The natural projection $P_o: T^* \to T_o^*$ is defined such that: (1) $P_o(\varepsilon) = \varepsilon$; (2) $P_o(\omega) = \omega$ if $\omega \in T_o$; (3) $P_o(\omega) = \varepsilon$ if $\omega \in T_{uo}$; and (4) $P_o(s\omega) = P_o(s)P_o(\omega)$ for $s \in T^*$ and $\omega \in T$, where $\varepsilon$ denotes the empty word. Events of the plant are enabled or disabled dynamically by the supervisor, limiting the closed-loop behavior within an acceptable language. Generally, the plant is under partial observation; thus, the supervisor decides to disable certain events on the basis of the projections of the strings generated by $G$. To be more specific, a partially observed supervisor is represented as a mapping $S_P: P_o(L(G)) \to 2^T$; the supervisor makes a decision based on $P_o(s)$ for any string $s$ generated by $G$. This kind of supervisor is called a $P$-supervisor. Consequently, two different strings $s_1$ and $s_2$ with the same projection will cause the identical control action.
A sublanguage $\mathcal{K}$ of $L(G)$ is considered controllable with respect to $L(G)$ and $T_{uc}$ if $\bar{\mathcal{K}} T_{uc} \cap L(G) \subseteq \bar{\mathcal{K}}$. Moreover, $\mathcal{K}$ is observable with respect to $L(G)$, $P_o$ and $T_c$ if, for all $s \in \bar{\mathcal{K}}$ and $\omega \in T_c$, $s\omega \notin \bar{\mathcal{K}}$ and $s\omega \in L(G)$ imply that $P_o^{-1}[P_o(s)]\omega \cap \bar{\mathcal{K}} = \emptyset$. Observability and controllability are necessary and sufficient for the existence of a $P$-supervisor that enforces the specification $\mathcal{K}$ [28].
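As a small illustration of the natural projection $P_o$, the following sketch (an assumed encoding of strings as tuples of event names, not code from the paper) erases unobservable events and shows why two strings with the same projection receive the same control action from a $P$-supervisor.

def natural_projection(s, T_o):
    # keep observable events, erase unobservable ones
    return tuple(w for w in s if w in T_o)

# Two strings that differ only in unobservable events have the same projection,
# so a P-supervisor issues the identical control action after either of them.
T_o = {"t1", "t2"}
assert natural_projection(("t1", "t9", "t2"), T_o) == natural_projection(("t1", "t2"), T_o)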

3. Actuator Enablement Attacks

3.1. Attack Definition and Modeling

We graphically depict a control system architecture under attacks in Figure 2. The control system is a plant $G$ controlled by $S_P$. The supervisor monitors the plant events through the projection $P_o$ of the strings generated by the system. Without considering attacks, the closed-loop behavior is $L(S_P/G) = \bar{\mathcal{K}}$, in which $\mathcal{K}$ is an observable and controllable sublanguage of $L(G)$, and $S_P$ is a “nominal” supervisor that is designed to enforce the specification $\mathcal{K}$.
The actuator signals transmitted from the supervisor to the plant are frequently attacked. We use $T_{c,v}$ to denote the set of vulnerable actuator events, which is a subset of all controllable actuator events $T_c$. Block $A_M$ in Figure 2 represents an attacker model that observes the same observable events through $P_o$ and can overwrite the control actions of the supervisor on the vulnerable actuators. In fact, the control action affecting plant $G$ is a combination of the control behavior of the supervisor $S_P$ and the attacker $A_M$ on the event set $T_c$. The attack detection module is denoted by $D_A$. It also receives the occurrences of observable events through $P_o$, infers from them whether an AE-attack has occurred, and informs the supervisor $S_P$ when an attack is detected. $F_M$ indicates that the system enters a “defense module” when the supervisor $S_P$ receives the message that the system is under attack; in this module, the supervisor disables every controllable event, which corresponds to “expect the worst and put safety first”. Block $U_M$ denotes that the system enters unmanageable conditions after being attacked.
The main goal of this article is to build an accurate model for monitoring AE-attacks in the system and to understand the impact of AE-attacks. First, the system is modeled by using a Petri net. Then, basis markings are calculated to obtain a basis attack model. Finally, a basis diagnoser and a basis verifier are constructed, which are used to judge whether the system satisfies AE-safe controllability; both methods have their advantages. The flowchart of AE-attack detection is shown in Figure 3.
We consider a closed-loop system with vulnerable actuators. The system is modeled as a Petri net $G = \langle N, M_0 \rangle$. To represent the events occurring in a plant, we use the transitions of the Petri net, i.e., each event is identified with a transition of the Petri net in this article. When a string $s$ contains an event $\omega$, we write $\omega \in s$. Equivalently, when a string $s$ contains an event in $T_c$, we write $T_c \in s$. The active event set at place $p$ in $G$ is denoted by $\Gamma_G(p) = \{t \in T : (p, t) \in F\}$.
In particular, the supervisor disables some actuator events to achieve the specification. Then, the attacker intrudes into certain actuators and re-enables these events, overriding the supervisor’s control behaviors. The attacker’s aim is to make the system arrive at an unsafe state and be damaged through the events that it enables. This type of attack is called an AE attack.

3.2. The Basis Attack Model under AE-Attacks

We consider a Petri net $G = \langle N, M_0 \rangle$. A pair $\pi = (T_c, T_{uc})$ is called a basis partition of $T$ if (1) $T_c \subseteq T$ and $T_{uc} = T \setminus T_c$, and (2) the $T_{uc}$-induced subnet is acyclic; otherwise, the system would become unstable. In this basis partition, the sets $T_c$ and $T_{uc}$ are called the sets of controllable transitions and uncontrollable transitions, respectively. The controllable events (transitions) may be disabled or enabled by $S_P$. The uncontrollable events are not affected by the supervisor’s actions.
We use $T_{c,v}^a = \{\omega^a : \omega \in T_{c,v}\}$ to denote the actuator events intruded by an attacker. We refer to it as the attacked actuator event set and define $T_a = T \cup T_{c,v}^a$. More precisely, $\omega^a$ denotes an occurrence of $\omega$ that has been disabled in the system by the supervisor and then enabled again by the attacker. The dilation operation is a mapping $D: T^* \to 2^{T_a^*}$ with the following properties: (1) $D(\varepsilon) = \{\varepsilon\}$; (2) $D(\omega) = \{\omega\}$ if $\omega \in T \setminus T_{c,v}$; (3) $D(\omega) = \{\omega, \omega^a\}$ if $\omega \in T_{c,v}$; and (4) $D(s\omega) = D(s)D(\omega)$, where $s \in T^*$ and $\omega \in T$. We also define the compression operator $C: T_a \to T$ with the following operating characteristics: (1) $C(\omega) = \omega$ if $\omega \in T$, and (2) $C(\omega^a) = \omega$ if $\omega^a \in T_{c,v}^a$. The operator of compression can be extended to $C: T_a^* \to T^*$.
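The dilation and compression operators can be illustrated in a few lines of Python. The sketch below is an assumption-laden encoding (the attacked event $\omega^a$ is represented by appending the suffix "a" to the event name), not the paper's notation, and uses a vulnerable event set matching Example 1.

from itertools import product

def dilate(s, T_cv):
    # D(s): every vulnerable event w may occur as w or as its attacked copy w^a
    choices = [(w, w + "a") if w in T_cv else (w,) for w in s]
    return {tuple(c) for c in product(*choices)}

def compress(s, T_cv):
    # C(s): strip the attack annotation, mapping w^a back to w
    return tuple(w[:-1] if w.endswith("a") and w[:-1] in T_cv else w for w in s)

T_cv = {"t8", "t10"}
assert dilate(("t7", "t8"), T_cv) == {("t7", "t8"), ("t7", "t8a")}
assert compress(("t7", "t8a"), T_cv) == ("t7", "t8")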
Assumption 1. 
The Petri net system used in this paper is a bounded net.
Assumption 1 means that the method of constructing the basis attack model in this article is applied to a bounded net, since the BRG is finite for a bounded Petri net. Under the condition of bounded nets, the maximum capacity of the places in a Petri net does not exceed a fixed constant K. In this paper, we do not give a specific value of K, which can be an arbitrarily large non-negative integer.
Assumption 2. 
All places in the system with uncontrollable transitions do not form a cycle.
Assumption 2 means that the T u c -induced subnet in the system is acyclic, which allows us to use the state equation to study the reachability of the uncontrollable subnet.
Assumption 3. 
The T u c -induced subnet is backward-conflict-free.
Assumption 3 means that every place has at most one input transition in the $T_{uc}$-induced subnet. Then, $Y_{min}(M, t)$ is a singleton [25]. Thus, the BRG is considered to be a deterministic finite-state automaton.
We construct a closed-loop system under AE-attacks in Algorithm 1. Let $H$ be the supervisor realized by a Petri net. Recall that a $P$-supervisor can capture the set of events that are currently enabled. In particular, enabled unobservable events can be captured with self-loops at the current place in $H$. More precisely, a supervisor is able to disable some events of $G$. First, we construct $G_a$ by adding all possible attack behaviors to $G$ with the compression operator $C$ on $L(G)$. Specifically, $G_a$ is constructed by adding a parallel transition labeled by $\omega^a \in T_{c,v}^a$ to $G$. Then, we construct the overall supervisor $H_a$ in the presence of AE-attacks. Intuitively, $H_a$ is constructed by adding self-loops with events in $T_{c,v}^a$ to all places where the candidate event’s compression is not in the set of active events at the place.
Then, we obtain the closed-loop system under attacks $G_M$ by taking the parallel composition of $H_a$ and $G_a$. $G_M$ simulates the system behavior in the case where AE-attacks are always present for all vulnerable actuators. We define a new set of events $\Phi$ in the given Petri net, $\Phi = \{t \in T_{uc} \mid \exists p \in P_u, (t, p) \in F\}$, where $P_u$ represents the set of unsafe places. The physical meaning of the set $\Phi$ is as follows: basis markings are reached by firing sequences of the form $\sigma t$, where $t \in T_c$ and $\sigma \in T_{uc}^*$; if the sequence $\sigma$ contains transitions in $\Phi$, then the markings that mark unsafe places may not belong to the set of basis markings.
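The event set $\Phi$ can be computed directly from the net structure. The one-function Python sketch below is illustrative (arcs are assumed to be encoded as (source, target) pairs); the toy data mirror Example 1 below, where $\Phi = \{t_9\}$.

def compute_phi(F, T_uc, P_u):
    # Phi = { t in T_uc | there exists p in P_u with (t, p) in F }
    return {t for t in T_uc if any((t, p) in F for p in P_u)}

# Toy arc set: t9 puts a token into the unsafe place p11, so Phi = {t9}.
F_example = {("t9", "p11"), ("p8", "t8")}
assert compute_phi(F_example, {"t0", "t2", "t3", "t6", "t9"}, {"p11"}) == {"t9"}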
In Algorithm 2, we modify the method of computing the BRG. Given a marking $M$ and a transition $t \in T_c \cup \Phi$, we define $\Sigma^{\Phi}(M, t) = \{\sigma \in (T \setminus (T_c \cup \Phi))^* \mid M[\sigma\rangle M',\ M' \geq Pre(\cdot, t)\}$ as the set of explanations of a transition $t$ at a marking $M$. Correspondingly, the sets $Y(M, t)$, $\Sigma_{min}(M, t)$ and $Y_{min}(M, t)$ are modified to $Y^{\Phi}(M, t)$, $\Sigma^{\Phi}_{min}(M, t)$ and $Y^{\Phi}_{min}(M, t)$, respectively. The restriction of the incidence matrix to $T_{uc} \setminus \Phi$ is denoted as $C^{\Phi}_{uc}$. Finally, we compute the corresponding basis markings from the initial marking by using the minimal e-vectors and the corresponding transitions in the set $T_c \cup \Phi$. The basis attack model $G_{\mathcal{B}}$ is constructed subsequently, which allows a more precise analysis of the attacker.
In the resulting basis attack model, states and arcs are drastically reduced, and the analysis of system behavior becomes more efficient. Since uncontrollable events can always happen at any time, there is no need to display them in the generated basis attack model $G_{\mathcal{B}}$; uncontrollable events appear only as explanation vectors that enable the firing of a certain event.
In the basis attack model $G_{\mathcal{B}}$, only the events in $T_c$ are controllable, and the events in $T_{c,v}$ are the attacker’s behaviors. However, the events in $T_{c,v}$ are also controllable, except that the original control action is overwritten by the attacker. The observability of the events in $T_{c,v}$ inherits the observability of the corresponding events in $T_c$.
Algorithm 1 Algorithm for the closed-loop system under attacks
Input: A Petri net $G = \langle N, M_0 \rangle$ and a supervisor $H = (P_h, T, F_h, M_{0,h}, W_h)$.
Output: A closed-loop system under attacks $G_M = (P_m, T_a, F_m, M_{0,m}, W_m)$.
1: Let $G_a = (P, T_a, F_a, M_0, W_a)$;
2: for all $p \in P$, $\nu \in T_a$ do
3:  if $(p, C(\nu)) \in F$ then
4:   Let $F_a = F_a \cup \{(p, C(\nu))\} \cup \{(C(\nu), \kappa)\}$, $\kappa \in C(\nu)^{\bullet}$;
5:   Let $W_a(p, C(\nu)) = W(p, C(\nu))$;
6:   Let $W_a(C(\nu), \kappa) = W(C(\nu), \kappa)$;
7:  end if
8: end for
9: Let $H_a = (P_h, T_a, F_{h,a}, M_{0,h}, W_{h,a})$;
10: for all $p \in P_h$, $\nu \in T_a$ do
11:  if $(p, \nu) \in F_h$ then
12:   Let $F_{h,a} = F_{h,a} \cup \{(p, \nu)\} \cup \{(\nu, \kappa)\}$, $(\nu, \kappa) \in F_h$;
13:   Let $W_{h,a}(p, \nu) = W(p, \nu)$;
14:   Let $W_{h,a}(\nu, \kappa) = W(\nu, \kappa)$;
15:  else if $(p, \nu) \notin F_h$ then
16:   Let $F_{h,a} = F_{h,a} \cup \{(p, \nu)\} \cup \{(\nu, p)\}$;
17:   Let $W_{h,a}(p, \nu) = W(p, \nu)$;
18:   Let $W_{h,a}(\nu, p) = W(p, \nu)$;
19:  end if
20: end for
21: Compute $G_M = G_a \parallel H_a$;
22: Output $G_M = (P_m, T_a, F_m, M_{0,m}, W_m)$.
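To illustrate the construction of $G_a$ described above (a parallel transition $\omega^a$ with the same connections as $\omega$), the following Python sketch adds, for every vulnerable event, an attacked copy carrying the same arcs and weights. It is a simplified reading under assumed data structures (arcs as a set of (source, target) pairs, weights as a dictionary), not the authors' code.

def add_attack_transitions(T, F, W, T_cv):
    # For each w in T_cv, add a parallel attacked copy w + "a" with w's arcs and weights.
    T_a, F_a, W_a = set(T), set(F), dict(W)
    for w in T_cv:
        wa = w + "a"                         # the attacked event w^a
        T_a.add(wa)
        for (x, y) in F:
            if x == w:                       # copy output arc (w, p) as (w^a, p)
                F_a.add((wa, y)); W_a[(wa, y)] = W[(x, y)]
            elif y == w:                     # copy input arc (p, w) as (p, w^a)
                F_a.add((x, wa)); W_a[(x, wa)] = W[(x, y)]
    return T_a, F_a, W_a

# Toy usage: a single vulnerable transition t8 with input p8 and output p9.
T_a, F_a, W_a = add_attack_transitions(
    {"t8"}, {("p8", "t8"), ("t8", "p9")}, {("p8", "t8"): 1, ("t8", "p9"): 1}, {"t8"})
assert ("p8", "t8a") in F_a and ("t8a", "p9") in F_a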
Algorithm 2 Construction of the basis attack model
Input: A closed-loop system under attacks $G_M = (P_m, T_a, F_m, M_{0,m}, W_m)$.
Output: A basis attack model $G_{\mathcal{B}} = (\mathcal{M}, T, \delta, M_0)$.
1: Let $\mathcal{M} = \emptyset$, $\mathcal{M}_{new} = \{M_0\}$;
2: Let $T_c$ be the set of controllable transitions;
3: while $\mathcal{M}_{new} \neq \emptyset$ do
4:  Select a state $M \in \mathcal{M}_{new}$;
5:  for all $t \in T_c \cup \Phi$ do
6:   Compute $Y^{\Phi}_{min}(M, t)$;
7:   for all $y \in Y^{\Phi}_{min}(M, t)$ do
8:    Let $\hat{M} = M + C^{\Phi}_{uc} \cdot y + C(\cdot, t)$;
9:    if $\hat{M} \notin \mathcal{M} \cup \mathcal{M}_{new}$ then
10:     Let $\mathcal{M}_{new} = \mathcal{M}_{new} \cup \{\hat{M}\}$;
11:    end if
12:    Let $\delta(M, t) = \hat{M}$;
13:   end for
14:  end for
15:  Let $\mathcal{M} = \mathcal{M} \cup \{M\}$;
16:  Let $\mathcal{M}_{new} = \mathcal{M}_{new} \setminus \{M\}$;
17: end while
18: Output $G_{\mathcal{B}} = (\mathcal{M}, T, \delta, M_0)$.
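A runnable skeleton of the main loop of Algorithm 2 is given below. It is a sketch under assumed helper functions: min_e_vectors(M, t) stands for $Y^{\Phi}_{min}(M, t)$, successor(M, y, t) stands for $M + C^{\Phi}_{uc} \cdot y + C(\cdot, t)$, and markings are assumed to be hashable tuples.

def build_basis_attack_model(M0, Tc_phi, min_e_vectors, successor):
    # Generate the basis markings reachable from M0 by firing each t in T_c U Phi
    # together with one of its minimal Phi-explanations, and record delta.
    states, delta = set(), {}
    frontier = {M0}                          # plays the role of M_new in Algorithm 2
    while frontier:
        M = frontier.pop()
        states.add(M)                        # move M from M_new to M
        for t in Tc_phi:
            for y in min_e_vectors(M, t):    # Y_min^Phi(M, t)
                M_hat = successor(M, y, t)   # M + C_uc^Phi * y + C(., t)
                delta[(M, t)] = M_hat
                if M_hat not in states and M_hat not in frontier:
                    frontier.add(M_hat)
    return states, delta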
Example 1. 
The plant $G$ is shown in Figure 4 with $T_c = \{t_1, t_4, t_5, t_7, t_8, t_{10}, t_{11}\}$, $T_{uc} = \{t_0, t_2, t_3, t_6, t_9\}$, $T_{c,v} = \{t_8, t_{10}\}$, $T_o = \{t_0, t_1, t_2, t_3, t_4, t_5, t_6, t_7, t_8, t_{10}, t_{11}\}$, and $T_{uo} = \{t_9\}$. The unsafe place in the plant is $p_{11}$, represented by a square, and thus $\Phi = \{t_9\}$. The supervisor $H$ is shown in Figure 5, which controls $G$. The supervisor disables transition $t_8$ at place $p_8$, thus stopping the system from arriving at the unsafe place $p_{11}$. Following Algorithm 1, we build the closed-loop system under attacks $G_M = G_a \parallel H_a$ in Figure 6. Following Algorithm 2, the basis attack model $G_{\mathcal{B}}$ is computed, starting from the initial marking $M_0$.
The generated basis attack model $G_{\mathcal{B}}$ is shown in Figure 7. There are seven states and eight arcs in Figure 7, while there are twelve states and twelve arcs in the reachability graph of the plant. As we can see, since the attacker enables vulnerable events, the system can reach the unsafe place $p_{11}$ through the event $t_9$.

4. Detection of Actuator Enablement Attacks

4.1. Detection Strategy

As mentioned above, under an AE-attack, a plant may deviate from the supervisor-enforced specification and arrive at an unsafe place. In order to prevent the impact of such an attack, we design a model for attack detection. When an attack is detected, the system switches to the “defense module”. This strategy restricts the plant so as to stop the system from reaching any place in a given set of unsafe places. While it is assumed that every place reached by $S_P/G$ is safe, not all places other than those reached by $S_P/G$ are unsafe. We use $P_u$ to represent the set of unsafe places.
Our techniques are based on those developed in [29] for “safe controllability” and in [10] for “disable languages”. In particular, using the model built in the above section, we cast the attack detection issue as a fault diagnosis problem in which a fault event is an intrusion event on an actuator that is vulnerable to attacks. An intrusion detection module is designed to monitor the plant and to inform the supervisor when an attack is diagnosed. When the message that the system has been attacked is received from $D_A$, the supervisor switches to the “defense module”, in which the supervisor disables every controllable event. We point out that the attack detection and safe controllability strategies are equally applicable to online implementations, as they rely only on diagnosers and supervisors.

4.2. AE-Safe Controllability

We review the AE-safe controllability in [23]. In particular, the set of unsafe places $P_u \subseteq P$ is considered. The set of strings whose last event is a vulnerable controllable event is denoted as $\Psi(T_{c,v}) = \{\gamma \in L(G) : \gamma = \gamma'\omega, \gamma' \in T^*, \omega \in T_{c,v}^a\}$. The basis attack model $G_{\mathcal{B}}$ generated by Algorithm 2 represents the system behaviors after AE-attacks. Let $\mathcal{M}_u = \{M \in \mathcal{M} \mid \exists p \in P_u : M(p) > 0\}$ be the set of unsafe states in $G_{\mathcal{B}}$. When $s'$ is a strict prefix of $s$, we write $s' < s$. Given $L \subseteq T^*$, the set of continuations of $s$ in $L$ is defined as $L/s := \{\gamma : s\gamma \in L\}$. We define $L(G_{\mathcal{B}}) := \{s \in T^* : \delta(M_0, s) \text{ is defined}\}$ as the language generated by $G_{\mathcal{B}}$. We define the projection $P_o^a: T^* \to (T_o \cup D(T_{c,v} \cap T_o))^*$. AE-safe controllability holds if every attack is detected in time to stop the plant from arriving at an unsafe state. In the following, we give a definition of AE-safe controllability.
Definition 3. 
The basis attack model $G_{\mathcal{B}} = (\mathcal{M}, T, \delta, M_0)$ is obtained from Algorithm 2. The language $L_B = L(G_{\mathcal{B}})$ satisfies AE-safe controllability with respect to the projection $P_o^a$, the attacked events $T_{c,v}^a$ and the unsafe states $\mathcal{M}_u$ if
$$\forall s \in \Psi(T_{c,v}^a),\ \forall \gamma \in L_B/s \text{ such that } \big(\delta(M_0, s\gamma) \cap \mathcal{M}_u \neq \emptyset\big) \wedge \big(\forall s' < s\gamma,\ \delta(M_0, s') \cap \mathcal{M}_u = \emptyset\big),$$
$$\exists \gamma_1, \gamma_2 \in T^* \text{ with } \gamma = \gamma_1\gamma_2 \text{ such that } \big(\forall \mu \in L_B:\ P_o^a(s\gamma_1) = P_o^a(\mu) \Rightarrow T_{c,v}^a \in \mu\big) \wedge \big(T_c \in \gamma_2\big).$$
Briefly, we say that the system satisfies AE-safe controllability if $L_B$, $P_o^a$ and $\mathcal{M}_u$ are given and Definition 3 holds. The definition of AE-safe controllability is illustrated in Figure 8. In the above definition, a state is first reached via the string $s$, whose final event $\omega_{c,v}^a$ is an actuator event under attack. The string $\gamma$ is a continuation of $s$ that first reaches the unsafe state. A system satisfies AE-safe controllability if, for every such pair of $s$ and $\gamma$, $\gamma$ can be divided as $\gamma = \gamma_1\gamma_2$ such that: (1) after $s\gamma_1$, an attacked event can be diagnosed, and (2) there exists a controllable event in $\gamma_2$. Recall that every event in $T_c$ is controllable, while the attacked events are uncontrollable. In other words, AE-safe controllability holds if, once an attack is detected, a controllable event can be disabled to stop the plant from arriving at an unsafe state; this must hold for every attack, i.e., no attack can be missed. We point out that the detection requirement after the string $s\gamma_1$ is that an attack is detected at every vulnerable actuator (cf. $T_{c,v}^a \in \mu$ in Definition 3). The module $D_A$ informs $S_P$ to disable all controllable actuator events once it is sure that an attack has occurred. The construction method of $G_{\mathcal{B}}$ and the requirements for determining AE-safe controllability yield the results shown below.
Theorem 1. 
Considering AE-attacks and the “defense module”, the plant G does not arrive at an unsafe state if and only if it satisfies AE-safe controllability.
Proof of Theorem 1. 
By contradiction, suppose that the plant satisfies AE-safe controllability but still arrives at an unsafe state. By Definition 3, if the system reaches an unsafe state, it violates the AE-safe controllability requirement, which contradicts the assumption.
Conversely, suppose that the system does not satisfy AE-safe controllability but never reaches an unsafe state. According to Definition 3, if AE-safe controllability is violated, the system reaches an unsafe state and suffers damage, which contradicts the assumption.    □

4.3. Test of AE-Safe Controllability Using Basis Diagnoser

To determine whether a system satisfies AE-safe controllability, we formulate an algorithm for constructing a diagnoser using the basis attack model. The diagnoser depends on the calculation of an automaton observer, which is generated by a parallel composition of a plant automaton and a label automaton, as mentioned in [26,30]. We propose algorithms to verify that the module $D_A$ is able to detect attacks before the system arrives at an unsafe state and that the supervisor can disable controllable events to stop the system from reaching $P_u$. We first review the definition of first-entered certain states as described in [29]. The diagnoser and related terms are explained in [26].
Definition 4. 
The basis diagnoser is described as $G_D = (\mathcal{M}_d, T_o, F_d, M_{0,d})$, which is obtained by combining the basis attack model $G_{\mathcal{B}}$ and the label automaton $A_\xi$. Three new sets are defined as follows: $Q_U = \{q \mid q \in \mathcal{M}_d : q \text{ is uncertain}\}$, $Q_N = \{q \mid q \in \mathcal{M}_d : q \text{ is normal}\}$, and $Q_Y = \{q \mid q \in \mathcal{M}_d : q \text{ is certain}\}$. The set of first-entered certain states is $FC = \{q \mid q \in Q_Y : \exists q' \in Q_U \cup Q_N,\ \exists \omega \in T_o,\ F_d(q', \omega) = q\}$.
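The set $FC$ of Definition 4 can be read directly off the diagnoser's transition function. The following sketch is one possible rendering under assumed names ($F_d$ is encoded as a dictionary from (state, observable event) pairs to states), not the paper's implementation.

def first_entered_certain(F_d, certain, uncertain, normal):
    # FC: certain states reachable in one observable step from an uncertain or normal state
    return {q for (q_prev, _event), q in F_d.items()
            if q in certain and q_prev in (uncertain | normal)}

# Toy usage with three diagnoser states: q1 is first-entered certain, q2 is not.
F_d = {("q0", "t8a"): "q1", ("q1", "t9"): "q2"}
assert first_entered_certain(F_d, {"q1", "q2"}, {"q0"}, set()) == {"q1"}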
First, we build the label basis attack model $G_\xi$ with Algorithm 3. It can be seen from the construction of $G_{\mathcal{B}}$ that our aim is to determine the occurrence of events in $T_{c,v}^a$ according to the set of observable events. Specifically, the events in $T_{c,v}^a$ are all “fault” events to be detected, and they are considered to be of the identical fault type. Thus, we want to build a basis diagnoser $G_D$. In Algorithm 3, we construct a label basis attack model $G_\xi = (\mathcal{M}_\xi, T_\xi, \delta_\xi, M_{0,\xi})$ by using the label automaton $A_\xi$ in Figure 9 and the attacked actuator events in $T_{c,v}^a$.
Algorithm 3 Building a label basis attack model
Input: Basis attack model $G_{\mathcal{B}} = (\mathcal{M}, T, \delta, M_0)$
Output: Label basis attack model $G_\xi = (\mathcal{M}_\xi, T_\xi, \delta_\xi, M_{0,\xi})$
1: Sign the initial marking $M_0$ as “N”;
2: for all $M_c \in \mathcal{M}$ do
3:  for all $t \in T$ do
4:   if $\delta(M_p, t) = M_c$ then
5:    if $M_p$ is labeled with “U” then
6:     Sign $M_c$ as “U”;
7:    else if $M_p$ is labeled with “Y” or $t \in T_{c,v}$ then
8:     Sign $M_c$ as “Y”;
9:     for all $t' \in T$ and $t' \neq t$ do
10:      if $\delta(M_p, t') = M_c$ and $t' \notin T_{c,v}$ and $M_p$ is not labeled with “Y” then
11:       Sign $M_c$ as “U”;
12:      end if
13:     end for
14:    else
15:     Sign $M_c$ as “N”;
16:    end if
17:   end if
18:  end for
19: end for
20: Output $G_\xi = (\mathcal{M}_\xi, T_\xi, \delta_\xi, M_{0,\xi})$.
Next, we introduce Algorithm 4, which is a diagnoser-based algorithm to verify AE-safe controllability. We start by constructing a basis diagnoser $G_D = Obs(G_\xi, T_{a,uo})$, where $Obs(G_\xi, T_{a,uo})$ represents the observer of $G_\xi$ with respect to the unobservable event set $T_{a,uo}$, with $T_{a,uo} = T_{uo} \cup D(T_{c,v} \cap T_{uo})$. In Step 2, we examine each uncertain state to determine whether it contains an unsafe state; if so, the diagnoser fails to detect the occurrence of an attack before the system arrives at an unsafe state, thus violating AE-safe controllability. Then, we calculate the set $FC$ and examine each state in $FC$ to determine whether it contains an unsafe state in Step 6. If so, even though an attack is detected, the system has already arrived at an unsafe state; therefore, the system violates AE-safe controllability. In Step 9, we consider a set of events $T' \subseteq T$ and a state $M \in \mathcal{M}$. The set of reachable states for $T'$ and $M$ is $Reach(G_{\mathcal{B}}, M, T') = \{M' \in \mathcal{M} : \exists s \in T'^*, \delta(M, s) = M'\}$. Finally, the set of states reachable from $FC$ by attacked actuator events or uncontrollable events is computed. Then, the states in this set are examined at Step 10 to determine whether they contain unsafe states. If so, even if an attack is diagnosed at this time, it cannot stop the system from arriving at an unsafe state; thus, AE-safe controllability is not satisfied. In Algorithm 4, the projection of a diagnoser state $q$ onto the corresponding state set of $G_{\mathcal{B}}$ is denoted as $q\!\uparrow_{\mathcal{M}} := \{M : \exists \ell, (M, \ell) \in q\}$.
Algorithm 4 AE-safe controllability test using basis diagnoser
Inputs:
  $G_\xi$: label basis attack model
  $\mathcal{M}_u$: set of unsafe states
  $T_{c,v}^a$: set of attacked actuator events
Output: AE-safe controllability $\in \{true, false\}$
1: Build the basis diagnoser $G_D = Obs(G_\xi, T_{a,uo})$;
2: if there is an uncertain state $q = \{(M_{i_1}, \xi_{i_1}), \ldots, (M_{i_n}, \xi_{i_n})\}$ in which there exists $M_{i_j} \in \mathcal{M}_u$ then
3:  AE-safe controllability = false;
4: else
5:  Compute $FC$ according to Definition 4;
6:  if there is $q = \{(M_{i_1}, Y), \ldots, (M_{i_n}, Y)\} \in FC$ in which there exists $M_{i_j} \in \mathcal{M}_u$ then
7:   AE-safe controllability = false;
8:  else
9:   Compute $\mathcal{M}_{uc} = \bigcup_{q \in FC} \bigcup_{M_x \in q\uparrow_{\mathcal{M}}} Reach(G_{\mathcal{B}}, M_x, T_{c,v}^a \cup \Phi)$;
10:   if $\mathcal{M}_{uc} \cap \mathcal{M}_u \neq \emptyset$ then
11:    AE-safe controllability = false;
12:   else
13:    AE-safe controllability = true.
14:   end if
15:  end if
16: end if
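Step 9 of Algorithm 4 relies on the reachability computation $Reach(G_{\mathcal{B}}, M_x, T_{c,v}^a \cup \Phi)$. A minimal sketch of this computation over the transition function of the basis attack model is shown below (assumed names; delta is a dictionary keyed by (state, event) pairs).

def reach(delta, M, events):
    # States of the basis attack model reachable from M by firing only events in the given set.
    reached, stack = {M}, [M]
    while stack:
        m = stack.pop()
        for t in events:
            nxt = delta.get((m, t))
            if nxt is not None and nxt not in reached:
                reached.add(nxt)
                stack.append(nxt)
    return reached

# Toy usage mirroring Example 2 below: M5 --t9--> M6, so Reach(G_B, M5, {t9}) = {M5, M6}.
assert reach({("M5", "t9"): "M6"}, "M5", {"t9"}) == {"M5", "M6"}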
Proposition 1. 
Consider $G_{\mathcal{B}} = (\mathcal{M}, T, \delta, M_0)$ from Algorithm 2 and the basis diagnoser $G_D$ constructed in Algorithm 4. The language $L_B$ does not satisfy AE-safe controllability with respect to $P_o^a$, $T_{c,v}^a$, and $\mathcal{M}_u$ if and only if at least one of the following conditions is true:
(1) There exists $q_U = \{(M_{i_1}, \xi_{i_1}), \ldots, (M_{i_n}, \xi_{i_n})\} \in Q_U$ such that $\exists j \in \{1, \ldots, n\}$, $M_{i_j} \in \mathcal{M}_u$ and $\xi_{i_j} = Y$; (2) there exists $q_Y = \{(M_{i_1}, Y), \ldots, (M_{i_n}, Y)\} \in FC$ such that $\exists j \in \{1, \ldots, n\}$, $M_{i_j} \in \mathcal{M}_u$; (3) there exists $M_x \in \mathcal{M}_{uc}$ such that $M_x \in \mathcal{M}_u$, where $\mathcal{M}_{uc}$ is defined in Algorithm 4.
Proof of Proposition 1. 
Suppose that the plant language $L_B$ is AE-safe controllable and that there exists $(M_{i_j}, \xi_{i_j})$ with $\xi_{i_j} = Y$ and $M_{i_j} \in \mathcal{M}_u$ in some state $q_U$. This indicates that the system has detected an attack occurrence while the plant has already reached an unsafe state. According to Definition 3, AE-safe controllability is violated, which is a contradiction. The remaining two conditions are handled analogously.
Conversely, suppose that there is no $(M_{i_j}, \xi_{i_j})$ with $\xi_{i_j} = Y$ and $M_{i_j} \in \mathcal{M}_u$ in any state $q_U$, and that the plant language $L_B$ violates AE-safe controllability. According to Definition 3, if the plant does not satisfy AE-safe controllability, then $L_B$ has reached an unsafe state $M_{i_j} \in \mathcal{M}_u$ after attacks, which is a contradiction. The remaining two conditions are handled analogously.    □
Note that since the event $\omega^a$ is observable, the basis diagnoser can detect an attack immediately when it occurs on the vulnerable actuator events with $\omega \in T_{c,v} \cap T_o$. In such a case, the system might still arrive at an unsafe state and violate AE-safe controllability via the attacked actuator events.
Example 2. 
Based on Example 1, the basis attack model $G_{\mathcal{B}}$ is shown in Figure 7. Next, we verify whether the system satisfies AE-safe controllability according to Algorithm 4. In the first step, we construct the label basis attack model $G_\xi$ with $T_{c,v}^a = \{t_8^a, t_{10}^a\}$ in Figure 10. For convenience, assuming $T_{a,uo} = \emptyset$, the basis diagnoser $G_D$ is the identical graph as $G_\xi$. By checking the states of the diagnoser in Figure 10, we can find that an attacked actuator event will be detected in $(M_5, Y)$ before the system arrives at the unsafe state $M_6$. Finally, we can see that $\mathcal{M}_{uc} = \{M_5, M_6\}$ contains the unsafe state $M_6$ at Step 10, which means that although an attack can be detected in advance by the diagnoser, the plant can still enter the unsafe state $M_6 \in \mathcal{M}_u$ under the attack since the event $t_9$ is uncontrollable, thus violating AE-safe controllability.

4.4. Test of AE-Safe Controllability Using Basis Verifier

This subsection verifies AE-safe controllability by using a basis verifier, which is another diagnostic method. The simple verifier-based approach was proposed and used in [31,32,33,34]. Compared to the diagnoser presented in the above section, the verifier requires lower complexity, but the verifier is not as suitable for online diagnosis as the diagnoser. Both methods are suitable for different scenarios, and both have their own advantages.
Algorithm 5 shows in detail how AE-safe controllability can be tested by a basis verifier. In the first step, the label basis attack model $G_\xi$ is constructed by using Algorithm 3. In the second step, the basis verifier $G_V$ is constructed by using the method of [31]. $G_V$ is obtained by computing $G_N$ and $G_F$, which denote the models of normal and faulty behavior, respectively. At a state $M$, the active event set of $G_V$ is denoted as $\Gamma_V(M)$. In the process of constructing $G_N$, the state space is represented by $\mathcal{M}_N$, and the unobservable events are renamed using the renaming function $R: T \setminus T_{c,v}^a \to T_R$, where $R(\omega) = \omega$ if $\omega \in T_{a,o}$ and $R(\omega) = \omega_R$ if $\omega \in T_{a,uo} \setminus T_{c,v}^a$. The unobservable event set is $T_{a,uo} = T_{uo} \cup D(T_{c,v} \cap T_{uo})$. Therefore, the unobservable events of $G_N$ and $G_F$ are considered to be “private” events. In Step 4, all states in $G_V$ are checked for the presence of unsafe states, and if one exists, AE-safe controllability is violated. In Step 8, we introduce a new state $A$ that represents the states reached by the remaining observable events, which may contain unsafe states, and then add self-loops of uncontrollable events at $A$. After diagnosing the attack, the system violates AE-safe controllability if there is a path that reaches an unsafe state only through unobservable events. At Step 12, we compute the combined basis verifier $G_T = G_V^{cd} \parallel G_F$, whose state space is represented by $\mathcal{M}_T$. In Step 13, if there is an unsafe state in $G_T$, the unsafe state was reached before the attack was detected.
Proposition 2. 
Let $L_B$ be the language generated by $G_{\mathcal{B}}$. Then, $L_B$ does not satisfy AE-safe controllability with respect to $P_o^a: T^* \to T_{a,o}^*$, $T_{c,v}^a$ and $\mathcal{M}_u$ if and only if at least one of the following conditions is true: (1) there exists $M_V = (M_N, N, M, Y) \in \mathcal{M}_V$ such that $M \in \mathcal{M}_u$, where $M_N \in \mathcal{M}_N$ and $M \in \mathcal{M}$; (2) there exists $(M_V^{cd}, M, Y) \in \mathcal{M}_T$ such that $M_V^{cd} = A$ and $M \in \mathcal{M}_u$, where $M_V^{cd} \in \mathcal{M}_V^{cd}$ and $M \in \mathcal{M}$.
Proof of Proposition 2. 
By contradiction, suppose that $L_B$ is AE-safe controllable and that there exists $M_V = (M_N, N, M, Y) \in \mathcal{M}_V$ with $M \in \mathcal{M}_u$. This means that the plant has detected an attack occurrence while it has already reached an unsafe state. According to Definition 3, AE-safe controllability is violated, which is a contradiction. The remaining condition is handled analogously.
Conversely, suppose that there is no $M_V = (M_N, N, M, Y) \in \mathcal{M}_V$ with $M \in \mathcal{M}_u$ and that $L_B$ violates AE-safe controllability. According to Definition 3, if the plant does not satisfy AE-safe controllability, then the plant has reached an unsafe state $M \in \mathcal{M}_u$ after an attack, which is a contradiction. The remaining condition is handled analogously.    □
Algorithm 5 AE-safe controllability test using basis verifier
Inputs:
  $G_{\mathcal{B}} = (\mathcal{M}, T, \delta, M_0)$: basis attack model
  $\mathcal{M}_u$: set of unsafe states
  $T_{c,v}^a$: set of attacked actuator events
Output: AE-safe controllability $\in \{true, false\}$
1: Build $G_\xi$ according to Algorithm 3;
2: Build the basis verifier $G_V = (\mathcal{M}_V, T_R \cup T, F_V, M_{0,V})$, taking $T_{c,v}^a$ to be the set of fault events, according to Algorithm 1 in [31];
3: Let $\Gamma_V(M) = \{t \in T_R \cup T \mid \exists M' \in \mathcal{M}_V, F_V(M, t) = M'\}$;
4: if there exists $(M_N, N, M, Y)$ in $G_V$ such that $M \in \mathcal{M}_u$ then
5:  AE-safe controllability = false;
6: else
7:  Build $G_V^{cd} = (\mathcal{M}_V^{cd}, T_R \cup T, F_V^{cd}, M_{0,V})$, where
8:   $\mathcal{M}_V^{cd} = \mathcal{M}_V \cup \{A\}$;
9:   $F_V^{cd}(M_V, \tau) = F_V(M_V, \tau)$, if $\tau \in \Gamma_V(M_V)$;
10:   $F_V^{cd}(M_V, \tau) = A$, if $\tau \in T_{a,o}$ and $\tau \notin \Gamma_V(M_V)$;
11:   $F_V^{cd}(A, \tau) = A$ for all $\tau \in \Phi \cup T_{c,v}^a$;
12:  Build $G_T = G_V^{cd} \parallel G_F$, where $G_F$ is defined in Algorithm 1 in [31];
13:  if there exists $(M_V^{cd}, M, \xi)$ in $G_T$ such that $M_V^{cd} = A$ and $M \in \mathcal{M}_u$ then
14:   AE-safe controllability = false;
15:  else
16:   AE-safe controllability = true.
17:  end if
18: end if
Example 3. 
Again reviewing the system in Example 1, the basis attack model $G_{\mathcal{B}}$ is shown in Figure 7, in which the controllable, observable and vulnerable events are $T_c = \{t_1, t_4, t_5, t_7, t_8, t_{10}, t_{11}\}$, $T_o = \{t_0, t_1, t_2, t_3, t_4, t_5, t_6, t_7, t_8, t_{10}, t_{11}\}$, and $T_{c,v} = \{t_8, t_{10}\}$, respectively. The model $G_N$, which represents the normal behavior of the system, is shown in Figure 11a. The model $G_F$, which represents the faulty/attacked behavior, is depicted in Figure 11b, and the basis verifier $G_V$ is displayed in Figure 12. By Step 8 of Algorithm 5, a new state $A$ needs to be added to $G_V$, which yields the basis verifier under attacks $G_V^{cd}$. Each state in $G_V$ is linked to state $A$ by the remaining observable events, and self-loops of uncontrollable events are added at state $A$. The basis verifier under attacks $G_V^{cd}$ is shown in Figure 13. After that, $G_T$ is obtained by computing $G_V^{cd} \parallel G_F$, as shown in Figure 14. According to the judgment condition of Step 13 in Algorithm 5, the system violates AE-safe controllability, since the state $(A, M_6, Y)$ in $G_T$ consists of two parts, the state $A$ and $M_6 \in \mathcal{M}_u$.

5. Computational Efficiency Analysis and Experiments

First, we consider the complexity of constructing a system network model by using a Petri net. In this case, the relationship between the size of the Petri net model and the size of the actual system is linear. For the complexity of constructing a BRG, if all the transitions in the system are controllable, then the basis marking set and the reachable marking set coincide, i.e., $\mathcal{M} = R(G)$. Therefore, in the worst case, constructing a BRG has the same complexity as constructing a reachability graph. In this case, the complexity is exponential with respect to the number of places and the initial marking. Next, we consider the complexity of building the basis attack model. Since the basis attack model is built on top of the BRG, the complexity of constructing it is identical to that of the BRG. Finally, in the detection of attacks using the basis diagnoser, the complexity of building the basis diagnoser is exponential with respect to the number of states in the basis attack model, whereas the basis verifier requires polynomial time with respect to the state space of the BRG.
For the attack detection method under the basis attack model described above, we give some numerical examples to validate the construction method and study the efficiency of the model through experiments comparing the number of states of the basis attack model with the number of states of the reachability graph. Finally, we determine whether AE-safe controllability is satisfied. The experiments show that the number of states under a basis attack model is significantly reduced. The experiments are run on a laptop computer with a Core-i5 2.40 GHz/2.50 GHz CPU using the Petri Net Basis Reachability Space Generator [35]. The experimental comparison is shown in Table 1.

6. Example

A cargo transportation system under an AE-attack is considered, which is represented by a Petri net, as depicted in Figure 15. The system is responsible for warehousing, processing, handling, packing and discharging cargoes from the factory. The places indicate locations in the factory, the transitions indicate intelligent automatic processing machines, and the arrows indicate conveyor belts. The attacker modifies the actuator information to force a machine to start, and its ultimate goal is to steal secret information when the system reaches the unsafe place $p_{33}$. The first batch of cargo enters the factory at $p_0$, the quality check is performed at $t_2$, the unqualified products are sent to $p_4$, and the qualified products continue to be sent to $p_3$ for the next check. The processing route is selected at $p_6$ according to the number of cargoes: if the number is small, then $t_9$ is enabled, otherwise $t_8$ is enabled. The system arrives at $p_{12}$ for classification, $t_{13}$ is normally enabled for processing, $t_{12}$ is the alternate processing route, and the cargo finally arrives at $p_{19}$. When it arrives at $p_{22}$, $t_{23}$ is enabled to select the method of cargo transportation and print the transportation order. Expedited transportation arrives at $p_{23}$, and ordinary transportation arrives at $p_{24}$. When the system arrives at $p_{33}$, the factory scans each cargo and registers the information in the cloud platform. In the whole system, $p_{33}$ is the most critical step and the most vulnerable to intrusion, and the attacker wants to steal the secret information at $p_{33}$. The goods are finally released at $p_{34}$. After a shipment is completed, the system reverts to $p_0$ for the next shipment.
The set of controllable events of the plant is $T_c = \{t_1, t_2, t_5, t_7, t_8, t_9, t_{17}, t_{18}, t_{19}, t_{22}, t_{23}, t_{28}, t_{33}, t_{34}\}$, the set of remaining events $T_{uc} = T \setminus T_c$ is uncontrollable, and the set of vulnerable events is $T_{c,v} = \{t_{17}, t_{28}\}$. The set of observable events is $T_o = \{t_0, t_1, t_2, t_4, t_5, t_6, t_7, t_8, t_9, t_{10}, t_{11}, t_{12}, t_{13}, t_{14}, t_{15}, t_{16}, t_{17}, t_{18}, t_{19}, t_{20}, t_{21}, t_{22}, t_{23}, t_{24}, t_{25}, t_{26}, t_{27}, t_{28}\}$. The set of unsafe places is $P_u = \{p_{33}\}$, and the set $\Phi$ is $\Phi = \{t_{32}\}$. Let the system $G$ be controlled by the supervisor $S_P$, who always disables the set of events $T_{c,v}$ in order to prevent damage to the system.
We consider that $\mathcal{K} \subseteq L(G)$ is an observable and controllable behavior which is realized by $S_P$. The realization $H$ of the supervisor is depicted in Figure 16. According to Algorithm 1, we first construct $G_a$ to represent the change of the system state after being attacked. Next, we build $H_a$ to represent the supervisor after being attacked. Then, we construct the closed-loop system $G_M$ under attacks by using parallel composition, as shown in Figure 17. Finally, according to the algorithm presented above, the basis attack model is generated, as depicted in Figure 18. We can see that the set of unsafe states is $\mathcal{M}_u = \{M_{20}, M_{21}\}$. After the attacked event $t_{28}$, the plant state changes from $M_{17}$ to $M_{19}$, and since the events in the set $\Phi$ are uncontrollable, the system continues to run uncontrollably from $M_{19}$ to the unsafe state $M_{21}$.
According to the safe controllability condition proposed in this article, we use Algorithms 3 and 4 to determine whether the plant satisfies AE-safe controllability. Part of the basis diagnoser $G_D$ is shown in Figure 19. It can be seen that the plant can still arrive at state $M_{19}$. At this point, the system detects that an attack has occurred. Next, the system will continue to reach the state $M_{21}$ via the event $t_{32} \in T_{uc}$. There is no controllable event to interrupt the process between the attacked event and the unsafe state, thus violating AE-safe controllability.
The detection of AE-attacks by constructing a basis diagnoser largely improves the efficiency. By using the traditional automaton approach in this example, we generate 123 states and 234 arc relations; the state compression rate is 81.3%, and the arc-relation compression rate is 85.5%. Since the reachability graph is equivalent to a finite state automaton, the above comparison is based on the reachability graph.

7. Conclusions

In this article, we studied the detection of AE-attacks in a supervisory control system. In a supervisory control system, actuator signals are vulnerable to manipulation by an attacker. An attacker will enable events that have been disabled by a supervisor in order to make the system reach unsafe states. We use Petri net techniques to develop attack detection methods that protect the system by disabling all controllable events after detecting an attack. First, we introduce a general framework for attack detection that models the plant as a Petri net to describe the system behaviors. Second, we simplify the attack model using basis markings to construct a basis attack model and analyze the system behaviors after an attack occurs. The basis attack model satisfies the properties of a closed-loop control system and reduces the number of states in the plant. Third, to prevent the attack from causing damage to the system, we also build the basis diagnoser to judge AE-safe controllability. After an attack is detected, the supervisor disables all controllable events to prevent the system from reaching an unsafe state. If successful, the system satisfies AE-safe controllability; otherwise, it does not. Compared with the traditional method, our method improves the detection efficiency and alleviates the state explosion problem. Finally, we also provide an offline solution to the attack detection problem. In this article, we considered AE-attacks to explain our framework and results; in fact, our method is general and applicable to other types of attacks. In future work, we will extend the approach to unbounded nets, relax the assumptions, and consider new types of attacks similar to the spread of viruses [36,37,38] in the framework of DESs.

Author Contributions

Conceptualization, Z.Y.; methodology, Z.Y. and X.C.; formal analysis, X.C.; investigation, X.D.; writing—original draft preparation, X.D.; writing—review and editing, X.C., X.L. and L.Z.; funding acquisition, Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 62273272 and 61873277, the Key Research and Development Program of Shaanxi Province under Grant 2023-YBGY-243, the Natural Science Foundation of Shaanxi Province under Grant 2022JQ-606, the Research Plan of the Department of Education of Shaanxi Province under Grant 21JK0752, and the Youth Innovation Team of Shaanxi Universities.

Data Availability Statement

Not applicable.

Acknowledgments

The authors sincerely appreciate the editor and anonymous referees for their careful reading and helpful comments to improve this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lin, L.; Zhu, Y.; Su, R. Synthesis of covert actuator attackers for free. Discret. Event Dyn. Syst. 2020, 30, 561–577. [Google Scholar] [CrossRef]
  2. Yu, Z.; Gao, H.; Wang, D.; Alnuaim, A.A.; Firdausi, M.; Mostafa, A.M. SEI2RS malware propagation model considering two infection rates in cyber–physical systems. Phys. A Stat. Mech. Appl. 2022, 597, 127207. [Google Scholar] [CrossRef]
  3. Meira-Góes, R.; Kang, E.; Kwong, R.H.; Lafortune, S. Synthesis of sensor deception attacks at the supervisory layer of cyber–physical systems. Automatica 2020, 121, 109172. [Google Scholar] [CrossRef]
  4. Meira-Góes, R.; Kang, E.; Kwong, R.H.; Lafortune, S. Stealthy deception attacks for cyber–physical systems. In Proceedings of the 2017 IEEE 56th Annual Conference on Decision and Control (CDC), Melbourne, VIC, Australia, 12–15 December 2017; pp. 4224–4230. [Google Scholar]
  5. Zhang, D.; Wang, Q.G.; Feng, G.; Shi, Y.; Vasilakos, A.V. A survey on attack detection, estimation and control of industrial cyber–physical systems. ISA Trans. 2021, 116, 1–16. [Google Scholar] [CrossRef] [PubMed]
  6. Yu, Z.; Sohail, A.; Jamil, M.; Beg, O.; Tavares, J.M.R. Hybrid algorithm for the classification of fractal designs and images. Fractals, 2022; accepted. [Google Scholar] [CrossRef]
  7. Hou, Y.; Shen, Y.; Li, Q.; Ji, Y.; Li, W. Modeling and optimal supervisory control of networked discrete-event systems and their application in traffic management. Mathematics 2023, 11, 3. [Google Scholar] [CrossRef]
  8. Yu, Z.; Wang, H.; Wang, D.; Li, Z.; Song, H. CGFuzzer: A fuzzing approach based on coverage-guided generative adversarial networks for industrial IoT protocols. IEEE Internet Things J. 2022, 9, 21607–21619. [Google Scholar] [CrossRef]
  9. Cong, X.; Fanti, M.P.; Mangini, A.M.; Li, Z. Critical observability of discrete-event systems in a Petri net framework. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 2789–2799. [Google Scholar] [CrossRef]
  10. Thorsley, D.; Teneketzis, D. Intrusion detection in controlled discrete event systems. In Proceedings of the 45th IEEE Conference on Decision and Control, San Diego, CA, USA, 13–15 December 2006; pp. 6047–6054. [Google Scholar]
  11. Wakaiki, M.; Tabuada, P.; Hespanha, J.P. Supervisory control of discrete-event systems under attacks. Dyn. Games Appl. 2019, 9, 965–983. [Google Scholar] [CrossRef]
  12. Wang, Y.; Pajic, M. Supervisory control of discrete event systems in the presence of sensor and actuator attacks. In Proceedings of the 2019 IEEE 58th Conference on Decision and Control (CDC), Nice, France, 11–13 December 2019; pp. 5350–5355. [Google Scholar]
  13. You, D.; Wang, S.; Zhou, M.; Seatzu, C. Supervisory control of Petri nets in the presence of replacement attacks. IEEE Trans. Autom. Control 2021, 67, 1466–1473. [Google Scholar] [CrossRef]
  14. You, D.; Wang, S.; Zhou, M.; Seatzu, C. Supervisor synthesis to thwart cyberattack with bounded sensor reading alterations. Automatica 2018, 94, 35–44. [Google Scholar]
  15. Agarwal, M. Rogue twin attack detection: A discrete event system paradigm approach. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 1813–1818. [Google Scholar]
  16. Fritz, R.; Zhang, P. Modeling and detection of cyberattacks on discrete event systems. IFAC-PapersOnLine 2018, 51, 285–290. [Google Scholar] [CrossRef]
  17. Wang, Y.; Li, Y.; Yu, Z.; Wu, N.; Li, Z. Supervisory control of discrete-event systems under external attacks. Inf. Sci. 2021, 562, 398–413. [Google Scholar] [CrossRef]
  18. Zhang, Q.; Seatzu, C.; Li, Z.; Giua, A. Stealthy sensor attacks for plants modeled by labeled Petri nets. IFAC-PapersOnLine 2020, 53, 14–20. [Google Scholar] [CrossRef]
  19. Ma, Z.; Cai, K. On resilient supervisory control against indefinite actuator attacks in discrete-event systems. IEEE Control Syst. Lett. 2022, 6, 2942–2947. [Google Scholar] [CrossRef]
  20. Yao, J.; Yin, X.; Li, S. On attack mitigation in supervisory control systems: A tolerant control approach. In Proceedings of the 2020 59th IEEE Conference on Decision and Control (CDC), Jeju, Republic of Korea, 14–18 December 2020; pp. 4504–4510. [Google Scholar]
  21. Rashidinejad, A.; Wetzels, B.; Reniers, M.; Lin, L.; Zhu, Y.; Su, R. Supervisory control of discrete-event systems under attacks: An overview and outlook. In Proceedings of the 2019 18th European Control Conference (ECC), Naples, Italy, 25–28 June 2019; pp. 1732–1739. [Google Scholar]
  22. Zheng, S.; Shu, S.; Lin, F. Modeling and control of discrete event systems under joint sensor-actuator cyberattacks. In Proceedings of the 2021 6th International Conference on Automation, Control and Robotics Engineering (CACRE), Dalian, China, 15–17 July 2021; pp. 216–220. [Google Scholar]
  23. Carvalho, L.K.; Wu, Y.C.; Kwong, R.; Lafortune, S. Detection and mitigation of classes of attacks in supervisory control systems. Automatica 2018, 97, 121–133. [Google Scholar] [CrossRef]
  24. Cong, X.; Fanti, M.P.; Mangini, A.M.; Li, Z. Decentralized diagnosis by Petri nets and integer linear programming. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 1689–1700. [Google Scholar] [CrossRef]
  25. Ma, Z.; Tong, Y.; Li, Z.; Giua, A. Basis marking representation of Petri net reachability spaces and its application to the reachability problem. IEEE Trans. Autom. Control 2017, 62, 1078–1093. [Google Scholar] [CrossRef]
  26. Cassandras, C.G.; Lafortune, S. Introduction to Discrete Event Systems, 3rd ed.; Springer: Cham, Switzerland, 2021. [Google Scholar]
  27. Cabasino, M.P.; Giua, A.; Pocci, M.; Seatzu, C. Discrete event diagnosis using labeled Petri nets. An application to manufacturing systems. Control Eng. Pract. 2011, 19, 989–1001. [Google Scholar] [CrossRef]
  28. Wonham, W.M.; Cai, K. Supervisory Control of Discrete-Event Systems, 1st ed.; Springer: Cham, Switzerland, 2019. [Google Scholar]
  29. Paoli, A.; Sartini, M.; Lafortune, S. Active fault tolerant control of discrete event systems using online diagnostics. Automatica 2011, 47, 639–649. [Google Scholar] [CrossRef]
  30. Sampath, M.; Sengupta, R.; Lafortune, S.; Sinnamohideen, K.; Teneketzis, D. Diagnosability of discrete-event systems. IEEE Trans. Autom. Control 1995, 40, 1555–1575. [Google Scholar] [CrossRef]
  31. Moreira, M.V.; Jesus, T.C.; Basilio, J.C. Polynomial time verification of decentralized diagnosability of discrete event systems. IEEE Trans. Autom. Control 2011, 56, 1679–1684. [Google Scholar] [CrossRef]
  32. Yoo, T.S.; Lafortune, S. Polynomial-time verification of diagnosability of partially observed discrete-event systems. IEEE Trans. Autom. Control 2002, 47, 1491–1495. [Google Scholar]
  33. Jiang, S.; Huang, Z.; Chandra, V.; Kumar, R. A polynomial algorithm for testing diagnosability of discrete-event systems. IEEE Trans. Autom. Control 2001, 46, 1318–1321. [Google Scholar] [CrossRef]
  34. Cong, X.; Fanti, M.P.; Mangini, A.M.; Li, Z. Critical observability of labeled time Petri net systems. IEEE Trans. Automat. Sci. Eng. 2022; Early Access. [Google Scholar] [CrossRef]
  35. Zou, M.; Tong, Y.; Ma, Z. PNBA: A software for marking estimation and reconfiguration in Petri nets using basis marking analysis. IFAC-PapersOnLine 2022, 55, 180–187. [Google Scholar] [CrossRef]
  36. Yu, Z.; Sohail, A.; Arif, R.; Nutini, A.; Nofal, T.A.; Tunc, S. Modeling the crossover behavior of the bacterial infection with the COVID-19 epidemics. Results Phys. 2022, 39, 105774. [Google Scholar] [CrossRef]
  37. Yu, Z.; Sohail, A.; Nofal, T.A.; Tavares, J.M.R. Explainability of neural network clustering in interpreting the COVID-19 emergency data. Fractals 2022, 30, 2240122. [Google Scholar] [CrossRef]
  38. Yu, Z.; Ellahi, R.; Nutini, A.; Sohail, A.; Sait, S.M. Modeling and simulations of COVID-19 molecular mechanism induced by cytokines storm during SARS-CoV2 infection. J. Mol. Liquids 2020, 327, 114863. [Google Scholar] [CrossRef]
Figure 1. The closed-loop control system architecture.
Figure 2. The control system architecture.
Figure 3. The flowchart of AE-attack detection.
Figure 4. The plant G.
Figure 5. The supervisor H.
Figure 6. G_M: the closed-loop system under attacks.
Figure 7. G_B: the basis attack model.
Figure 8. Graphic representation of AE-safe controllability.
Figure 9. Label automaton A_ξ.
Figure 10. Label basis attack model G_ξ.
Figure 11. (a) The system model under normal behavior G_N and (b) the system model under attacked behavior G_F.
Figure 12. The basis verifier G_V.
Figure 13. The basis verifier under attacks G_V^{cd}.
Figure 14. The combined basis verifier G_T.
Figure 15. Petri net G.
Figure 16. Supervisor H.
Figure 17. The closed-loop system under attacks G_M.
Figure 18. The basis attack model G_B.
Figure 19. A part of the basis diagnoser G_D.
Table 1. Experimental comparison of the basis attack model and the traditional attack model [23].

Experimental Index | Percentage of Controllable Transitions | Number of Basis Markings | Number of Reachable Markings | Marking Compression Rate | AE-Safe Controllability
1 | 100% | 16 | 16 | 0% | True
2 | 97% | 57 | 84 | 32% | False
3 | 81% | 58 | 72 | 19% | True
4 | 62% | 42 | 72 | 41% | True
5 | 60% | 39 | 81 | 51% | False
6 | 51% | 33 | 112 | 70% | True
7 | 43% | 23 | 124 | 81% | False
8 | 35% | 23 | 112 | 79% | False
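As a reading aid, the compression rates reported in Table 1 are consistent with the ratio below; this is our reconstruction from the tabulated values, and the paper's formal definition may be stated differently:

\[
\text{marking compression rate} = \left(1 - \frac{\text{number of basis markings}}{\text{number of reachable markings}}\right) \times 100\%.
\]

For instance, experiment 2 gives \((1 - 57/84) \times 100\% \approx 32\%\).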
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
