
Exact analysis of summary statistics for continuous-time discrete-state Markov processes on networks using graph-automorphism lumping

Abstract

We propose a unified framework to represent a wide range of continuous-time discrete-state Markov processes on networks, and show how many network dynamics models in the literature can be represented in this unified framework. We show how a particular sub-set of these models, referred to here as single-vertex-transition (SVT) processes, leads to the analysis of quasi-birth-and-death (QBD) processes in the theory of continuous-time Markov chains. We illustrate how to analyse a number of summary statistics for these processes, such as absorption probabilities and first-passage times. We extend the graph-automorphism lumping approach [Kiss, Miller, Simon, Mathematics of Epidemics on Networks, 2017; Simon, Taylor, Kiss, J. Math. Bio. 62(4), 2011] by providing a matrix-oriented representation of this technique, and show how it can be applied to a very wide range of dynamical processes on networks. This approach can be used not only to solve the master equation of the system, but also to analyse the summary statistics of interest. We also show the interplay between the graph-automorphism lumping approach and the QBD structures when dealing with SVT processes. Finally, we illustrate our theoretical results with examples from the areas of opinion dynamics and mathematical epidemiology.

Introduction

Dynamical processes on networks are one of the main topics in network science (Barrat et al. 2008; Castellano et al. 2009; Newman 2003; 2010; Porter and Gleeson 2016). Numerous applications have been modelled as dynamical processes on networks, including epidemics (Kiss et al. 2017; Pastor-Satorras et al. 2015), magnetism (Glauber 1963), opinion dynamics (Galam 2002; Sood and Redner 2005; Sznajd-Weron and Sznajd 2000), diffusion of innovations (Bass 1969; Mellor et al. 2015; Melnik et al. 2013; Watts 2002), rumour spread (Daley and Kendall 1964; Goldenberg et al. 2001; Kempe et al. 2003), meme popularity (Gleeson et al. 2014), cultural polarisation (Axelrod 1997; Castellano et al. 2000), racial segregation (Schelling 1969; 1971), stock market trading (Kirman 1993), cascading failures (Gleeson et al. 2012; Haldane and May 2011; Motter and Lai 2002) and language evolution (Baronchelli et al. 2006; Bonabeau et al. 1995; Castelló et al. 2006). The mathematical analysis of such models has highlighted the rich yet subtle dependence of dynamical phenomena on network topology (Newman 2010; Porter and Gleeson 2016; Durrett 2007). However, the standard mathematical approach is via mean-field theories (Kiss et al. 2017), where dynamical correlations, clustering and modularity are ignored (Gleeson et al. 2012). While it is possible to derive high-accuracy mean-field approximations (Gleeson 2011) and some approximation approaches are amenable to rigorous analysis (Li et al. 2012; Van Mieghem 2011), there is still no theory that quantifies the accuracy of mean-field approximations or characterises the types of networks or dynamical processes where approximations work well.

In contrast to the use of approximate mean-field theories or simulation studies (Stoll et al. 2012), recent approaches in mathematical epidemiology have focused on the exact analysis of infection spread dynamics occurring on small networks, for example to quantify the importance of nodes, in terms of outbreak size, vaccination and early infection in SIR epidemics (Holme 2017), and to compute SIS extinction times using computational algebra for all sufficiently small graphs (Holme and Tupikina 2018). By focusing on small networks, it is possible to include heterogeneous rates of infection and recovery in the context of particular applications, such as the spread of hospital-acquired infections in intensive care units (López-García 2016), and to analyse these systems in terms of a number of performance measures. Thus, the aim is usually to compute summary statistics related to the dynamical process (Economou et al. 2015), instead of focusing on analysing the complete transient dynamics of the process, which are usually more complex to study (Keeling and Ross 2009).

Loosely speaking, a dynamical process on a network describes how the state of each node or vertex in the network changes in time. In this paper, we focus on dynamical processes that are described in terms of continuous-time Markov chains (CTMCs), although other types of dynamical processes are possible (for example where the state-space is continuous (Deffuant et al. 2000; Fortunato 2004; Hegselmann and Krause 2002), and/or the dynamics deterministic (Nakao and Mikhailov 2010; Rodrigues et al. 2016; Ward and Grindrod 2014) or non-Markovian (Kiss et al. 2015; Castro et al. 2018)). Thus for brevity, we will refer here to dynamical processes on networks described by CTMCs as simply network dynamics. It is well-known that the state-space of network dynamics grows exponentially with the number of nodes, so can be extremely large for even fairly small networks. Furthermore, network dynamics are determined by the infinitesimal generator matrix, which contains the rates at which the process moves between pairs of states. Given the large size of this matrix, even constructing it can prove challenging (Van Mieghem et al. 2009). In the area of mathematical epidemiology it has been recently shown how many network dynamical processes can be represented as quasi-birth-and-death processes (QBDs) (Simon et al. 2011; López-García 2016; Economou et al. 2015) by appropriately ordering the states within the state-space of the corresponding CTMC, so that its infinitesimal generator matrix is significantly sparse and tridiagonal-by-blocks. QBDs exploit the tri-diagonal structure of transitions occurring across groups or levels of states, and summary statistics in these systems can be analysed by means of computationally efficient algorithms (Latouche and Ramaswami 1999). While this approach is standard in areas such as queuing theory (Bean et al. 1997; Gómez-Corral and López-García 2014) and population dynamics (Gómez-Corral and López-García 2015), there is potential for it to be adopted more widely in applications of dynamics on networks in other areas of network science.

One way to reduce the size of the state-space is via lumping (Kemeny and Snell 1960), where several states in the state-space are combined together into one lumped state. Under certain conditions, quantities of interest about the unlumped system, such as its stationary distribution or transient probabilities, can be determined from the lumped system (Buchholz 1994). Efficient algorithms have been developed that can construct the optimal lumping quotient in \(\mathcal {O}(m\log (n))\) time, where n is the number of states and m is the number of transitions (Valmari and Franceschinis 2010; Derisavi et al. 2003). However, for network dynamics the number of states is exponential in the number of vertices N, and the number of transitions is typically of order N times the number of states. It has also been recently shown that network symmetries can be used to lump network dynamics (Kiss et al. 2017; Simon et al. 2011). For studies of small networks, such as those described above, lumping the state-space on the basis of even relatively few symmetries can dramatically increase the size of the network that can be studied.

The main contribution of this paper is to illustrate how lumping based on network symmetries can be expressed in a matrix-oriented fashion for a broad class of network dynamics. This includes a large sub-class that give rise to QBD structures for which there are efficient algorithmic and matrix-oriented methods for the exact analysis of summary statistics. In the “CTMC dynamics on networks” section we present a unified CTMC framework that captures a wide range of network dynamics, the typical summary statistics of interest and how network symmetries can be used to lump the state-space using a matrix-oriented approach. In the “Single-vertex transition processes” section we focus on a particular sub-class of models that we call single-vertex transition processes that result in QBDs, and illustrate how the summary statistics described in the “CTMC dynamics on networks” section can be computed efficiently both for the full and lumped state-spaces. In the “Applications to classes of model” section, we show how our theoretical approach applies to various types of models using specific examples from opinion dynamics and mathematical epidemiology.

CTMC dynamics on networks

We start by introducing the general set-up of the type of CTMC network dynamics that we consider. This sets out our notation and, crucially, the structure of the state-space. Model-specific transition rates are discussed in detail in the “Applications to classes of model” section.

Let us consider a network \({\mathcal {G}}=({\mathcal {V}},{\mathcal {E}})\) where \({\mathcal {V}}=\{1,\dots,N\}\) is the set of vertices and \({\cal E}\subseteq {\mathcal {V}}\times {\mathcal {V}}\) is the set of edges of the network. In this paper we focus on simple networks, i.e. unweighted, undirected networks with no self- or multi-edges. We consider that, over time, vertices can take a number of different vertex-states from the set \({\mathcal {W}}=\{1,\dots,M\}\), and that vertices change their vertex-state according to a continuous-time Markov chain (CTMC) \({\mathcal {X}}=\{\textbf {X}(t):\ t\geq 0\}\). The state of \({\mathcal {X}}\) at time t≥0, X(t), is a random vector \(\textbf {X}(t)=\left (X_{1}(t),\dots,X_{N}(t)\right)\) where \(X_{i}(t)\in \mathcal {W}\) represents the vertex-state of vertex i at time t≥0, for 1≤i≤N. Since each vertex takes vertex-states in \({\mathcal {W}}\), it is clear that the space of states of the CTMC is given by \({\mathcal {S}}={\cal W}^{\mathcal {V}}=\{\left (w_{1},\dots,w_{N}\right):\ w_{j}\in {\mathcal {W}},\ 1\leq j\leq N\}\), with cardinality \(\#{\mathcal {S}}=M^{N}\), so that it can be written as \({\cal S}=\{S_{1},S_{2},\dots,S_{M^{N}}\}\), with any arbitrary order of states. That is, each state \(S\in {\mathcal {S}}\) of the process \({\mathcal {X}}\) is just a permutation with repetition of N vertex-states.

For a given state \(S\in {\mathcal {S}}\), we denote by \(S(v)\in {\mathcal {W}}\) the vertex-state of vertex v in state S, for any 1≤v≤N. Also, given two states \(S_{i},S_{j}\in {\mathcal {S}}\), the sub-set \(U(S_{i},S_{j})\subset {\mathcal {V}}\) of so-called transition vertices contains those vertices \(u\in U(S_{i},S_{j})\) such that Si(u)≠Sj(u) and Si(v)=Sj(v) for every \(v\in {\mathcal {V}}\setminus U(S_{i},S_{j})\) (that is, those vertices that differ in their vertex-state between states Si and Sj). For the (one-jump) transition between states \(S_{i}\rightarrow S_{j}\), we denote the transition rate by \(q_{S_{i}\rightarrow S_{j}}\geq 0\). We will discuss how the transition rates \(q_{S_{i}\rightarrow S_{j}}\) can be determined for specific types of model in the “Applications to classes of model” section. For any given state \(S_{i}\in {\mathcal {S}}\), we denote by \(A(S_{i})\subset {\mathcal {S}}\) the set of states directly accessible (that is, in one jump of the process) from Si; thus \(A(S_{i})=\{S_{j}\in {\mathcal {S}}:\ q_{S_{i}\rightarrow S_{j}}>0\}\).

The dynamics of the CTMC \({\mathcal {X}}\) can be then analysed in terms of its transition probabilities \(p_{S_{j},S_{i}}(t)=\mathbb {P}(\textbf {X}(t)=S_{i}\ |\ \textbf {X}(0)=S_{j})\), \(\forall S_{i},S_{j}\in {\mathcal {S}}\) and t≥0. These probabilities, which can be stored in a matrix \(\textbf {P}(t)=(p_{S_{j},S_{i}}(t))_{i\in \{1,\dots,M^{N}\},\ j\in \{1,\dots,M^{N}\}}\), can be computed by solving the master equation

$$\begin{array}{@{}rcl@{}} \frac{d\textbf{P}(t)}{dt} &=& \textbf{P}(t)\textbf{Q}, \end{array} $$
(1)

where Q is the infinitesimal generator of \({\mathcal {X}}\). In particular, the matrix Q contains the transition rates between states in an ordered fashion, as follows

$$\begin{array}{@{}rcl@{}} \textbf{Q} &=& (q_{S_{i}\rightarrow S_{j}})_{i\in\{1,\dots,M^{N}\},j\in\{1,\dots,M^{N}\}}, \end{array} $$
(2)

where the elements in the diagonal of Q are minus the sum of the off-diagonal elements in the corresponding row.
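For illustration, the construction of Q in (2) and the solution of (1) via the matrix exponential can be sketched in a few lines of Python; the small network, the voter-model-style rate function and the evaluation time used below are illustrative assumptions rather than part of the general framework.

```python
import itertools
import numpy as np
from scipy.linalg import expm

# Illustrative set-up: a small network given as an adjacency list, M = 2 vertex-states,
# and a generic rate function q_{S_i -> S_j} (here a voter-model-style rule, chosen only
# so that the sketch runs; replace it with the rates of the model of interest).
adj = {0: [1], 1: [0, 2], 2: [1]}                      # a two-edge path network
N, M = len(adj), 2
states = list(itertools.product(range(M), repeat=N))   # S = W^V, with #S = M^N

def rate(Si, Sj):
    """Hypothetical transition rate q_{Si -> Sj}; single-vertex updates only."""
    diff = [v for v in range(N) if Si[v] != Sj[v]]
    if len(diff) != 1:
        return 0.0
    v = diff[0]
    k = len(adj[v])
    m1 = sum(Si[u] for u in adj[v])                    # neighbours of v in vertex-state 1
    return m1 / k if Sj[v] == 1 else (k - m1) / k

# Infinitesimal generator (2): off-diagonal rates, diagonal = minus the row sum.
Q = np.zeros((M ** N, M ** N))
for i, Si in enumerate(states):
    for j, Sj in enumerate(states):
        if i != j:
            Q[i, j] = rate(Si, Sj)
    Q[i, i] = -Q[i].sum()

# The solution of the master equation (1) with P(0) = I is P(t) = exp(Qt).
P = expm(Q * 0.5)                                      # transition probabilities at t = 0.5
```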

For an arbitrary vertex-state \(m\in \mathcal {W}\), say here \(m=M\) without any loss of generality, one can organise the space of states \({\mathcal {S}}\) into groups or levels,

$$\begin{array}{@{}rcl@{}} {\mathcal{S}} &=& \bigcup\limits_{i=0}^{N} {\mathcal{S}}^{(i)},\quad {\mathcal{S}}^{(i)}=\{S\in{\mathcal{S}}:\ \#\{u\in{\mathcal{V}}:\ S(u)=M\}=i\}. \end{array} $$

That is, each level \({\mathcal {S}}^{(i)}\) is formed by all the states in \({\mathcal {S}}\) that contain exactly i vertices with vertex-state M. Each level can be further divided into sub-levels based on different vertex-states (Economou et al. 2015), although this will not be necessary in this paper. It is clear that the number of states in level \({\mathcal {S}}^{(i)}\) is the combinatorial number \(\#{\mathcal {S}}^{(i)}=\binom {N}{i}(M-1)^{N-i}=\frac {N!}{i!(N-i)!}(M-1)^{N-i}\), which is consistent since

$$\begin{array}{@{}rcl@{}} \#{\mathcal{S}} &=& \sum\limits_{i=0}^{N}\#{\mathcal{S}}^{(i)} \ = \ \sum\limits_{i=0}^{N} \binom{N}{i}(M-1)^{N-i} \ =\ M^{N}. \end{array} $$

One can then denote by \({\mathcal {S}}^{(i)}=\left \{S^{(i)}_{1},\dots,S^{(i)}_{\#{\mathcal {S}}^{(i)}}\right \}\), for 0≤i≤N, the states contained inside each level. Here, \(S^{(i)}_{j}\) denotes the jth state inside level \({\mathcal {S}}^{(i)}\), according to some arbitrary order, for \(1\leq j\leq \#{\mathcal {S}}^{(i)}\). Then, by ordering states in \({\mathcal {S}}\) in terms of levels, \({\mathcal {S}}^{(0)}\prec {\mathcal {S}}^{(1)}\prec \dots \prec {\mathcal {S}}^{(N)}\), the matrix Q can be written as

$$\begin{array}{@{}rcl@{}} \textbf{Q} &=& \left(\begin{array}{cccc} \textbf{Q}_{00} & \textbf{Q}_{01} & \dots & \textbf{Q}_{0N}\\ \textbf{Q}_{10} & \textbf{Q}_{11} & \dots & \textbf{Q}_{1N}\\ \vdots & \vdots & \ddots & \vdots\\ \textbf{Q}_{N0} & \textbf{Q}_{N1} & \dots & \textbf{Q}_{NN} \end{array}\right), \end{array} $$
(3)

where each sub-matrix Qij contains the infinitesimal transition rates from states in level \({\mathcal {S}}^{(i)}\) to states in level \({\mathcal {S}}^{(j)}\). We note here that \({\mathcal {S}}^{(N)}=\{(M,\dots,M)\}\) has a single state, so that the matrices QNj are in fact row vectors, matrices QjN are column vectors, and QNN is a single-element matrix.
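The grouping of states by levels, and the counting identity above, can be sketched as follows (the values of N and M below are illustrative choices):

```python
import itertools
from math import comb

N, M = 3, 3                                   # illustrative sizes
states = list(itertools.product(range(1, M + 1), repeat=N))

# Level i contains the states with exactly i vertices in vertex-state M.
levels = {i: [S for S in states if sum(1 for w in S if w == M) == i]
          for i in range(N + 1)}

for i in range(N + 1):
    assert len(levels[i]) == comb(N, i) * (M - 1) ** (N - i)
assert sum(len(level) for level in levels.values()) == M ** N
```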

Instead of analysing the master equation (1) directly, one can analyse the dynamics of the process \({\mathcal {X}}\) by focusing on a number of summary statistics or stochastic descriptors. In the area of mathematical epidemiology, and when analysing the spread of a disease through a population by means of network dynamics, these summary statistics could represent the size of the outbreak (Economou et al. 2015), the time until the epidemic dies out (Economou et al. 2015), the area under the curve of infectives (Ball and Clancy 1995), or the probability of infection for a particular node of interest (López-García 2016). We refer the reader to (Keeling and Ross 2007) for a discussion on how exact and matrix-oriented approaches can be implemented to compute summary statistics of interest in continuous-time Markov chains.

Summary statistics

We focus in this section on quantities of interest related to absorption, that is, the arrival of the dynamical process at a particular absorbing state, in which the process then remains indefinitely. However, we point out that our arguments are easily generalisable to other situations, such as the computation of steady-state probabilities for positive recurrent CTMCs (Buchholz 1994; Kulkarni 2016), or other summary statistics or quantities of interest (López-García 2016; Economou et al. 2015), depending on the particular network dynamics under study.

Probabilities of absorption

Let us consider that the process \({\mathcal {X}}\) has two absorbing states, so that

$$\begin{array}{@{}rcl@{}} {\mathcal{S}} &=& {\mathcal{C}}\cup\{S_{a}\}\cup\{S_{b}\}, \end{array} $$

where Sa and Sb are two arbitrary absorbing states and \({\mathcal {C}}=\{S_{1},S_{2},\dots,S_{M^{N}-2}\}\) is the class of transient states. An example of this situation is provided in the “Biased voter model” section. We note that although we consider here two absorbing states in our system, our arguments generalise to processes with more than two absorbing states. We are interested in computing the probabilities of absorption into Sa from an initial state Si,

$$\begin{array}{@{}rcl@{}} p_{S_{i}}^{(a)} &=& \lim\limits_{t\rightarrow+\infty}\mathbb{P}(\textbf{X}(t)=S_{a}\ |\ \textbf{X}(0)=S_{i}) \ =\ 1-\lim\limits_{t\rightarrow+\infty}\mathbb{P}(\textbf{X}(t)=S_{b}\ |\ \textbf{X}(0)=S_{i})\\ &=& 1-p_{S_{i}}^{(b)},\quad \forall S_{i}\in{\mathcal{C}}. \end{array} $$

Probabilities \(\{p_{S_{i}}^{(a)}:\ S_{i}\in {\mathcal {C}}\}\) can be computed by following a first-step argument, which yields the system of linear equations

$$\begin{array}{@{}rcl@{}} p_{S_{i}}^{(a)} &=& \frac{1}{\Delta_{S_{i}}}\sum\limits_{S_{j}\in A(S_{i})}q_{S_{i}\rightarrow S_{j}}p_{S_{j}}^{(a)},\quad \forall S_{i}\in{\mathcal{C}}, \end{array} $$
(4)

where \(\Delta _{S_{i}}=\sum \limits _{S_{k}\in A(S_{i})}q_{S_{i}\rightarrow S_{k}}\), and with boundary conditions \(p^{(a)}_{S_{a}}=1\) and \(p^{(a)}_{S_{b}}=0\). Since each equation in (4) corresponds to an initial transient state \(S_{i}\in {\mathcal {C}}\), the system (4) can be written in matrix form as

$$\begin{array}{@{}rcl@{}} \textbf{p}^{(a)} &=& \textbf{A}\textbf{p}^{(a)}+\textbf{b}^{(a)}. \end{array} $$
(5)

For \(S_{i},S_{j}\in \mathcal {C}\), the ijth component of the matrix A is the one-step jump transition probability \(q_{S_{i}\rightarrow S_{j}}/\Delta _{S_{i}}\), the ith component of the column vector p(a) is the probability \(p_{S_{i}}^{(a)}\) and the ith component of the column vector b(a) is the one-step jump transition probability \(q_{S_{i}\rightarrow S_{a}}/\Delta _{S_{i}}\), which might be zero if \(S_{a}\notin A(S_{i})\).
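Assuming the generator Q is available as a dense array, with `transient` indexing the states of \(\mathcal{C}\) and `a` indexing \(S_{a}\) (names chosen here only for illustration), system (5) can be solved directly as in the following sketch:

```python
import numpy as np

def absorption_probabilities(Q, transient, a):
    """Solve (5): p^(a) = A p^(a) + b^(a), i.e. (I - A) p^(a) = b^(a); sketch only."""
    Delta = -np.diag(Q)[transient]                       # total outflow rates Delta_{S_i}
    A = Q[np.ix_(transient, transient)] / Delta[:, None]
    np.fill_diagonal(A, 0.0)                             # one-jump probabilities q / Delta
    b = Q[transient, a] / Delta                          # one-jump probabilities into S_a
    return np.linalg.solve(np.eye(len(transient)) - A, b)
```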

Time until absorption

Let us now consider the alternative situation where the process \({\mathcal {X}}\) has only one (arbitrary) absorbing state, Sa, so that

$$\begin{array}{@{}rcl@{}} {\mathcal{S}} &=& {\mathcal{C}}\cup\{S_{a}\}, \end{array} $$

where \({\mathcal {C}}=\{S_{1},S_{2},\dots,S_{M^{N}-1}\}\) is the class of transient states. We note that if one has several absorbing states in the process, and is interested in the time until absorption (into any absorbing state), one way to proceed is to combine all these absorbing states into an absorbing macro-state, and study the time until absorption into this macro-state.

Here, \(\lim \limits _{t\rightarrow +\infty }\mathbb {P}(\textbf {X}(t)=S_{a})=1\) regardless of the initial state X(0). Our interest is instead in analysing the time until absorption, \(T=\inf \{t\geq 0:\ \textbf {X}(t)=S_{a}\}\). To this end, we can define its Laplace-Stieltjes transform as \(\phi _{S_{i}}(z)=E[e^{-zT}\ |\ \textbf {X}(0)=S_{i}]\), for \(\Re (z)\geq 0\) and \(S_{i}\in {\mathcal {S}}\), and by a first-step argument one can obtain the system of linear equations

$$\begin{array}{@{}rcl@{}} \phi_{S_{i}}(z) &=& \frac{1}{z+\Delta_{S_{i}}}\sum\limits_{S_{j}\in A(S_{i})}q_{S_{i}\rightarrow S_{j}}\phi_{S_{j}}(z),\quad \forall S_{i}\in{\mathcal{C}},\ \Re(z)\geq0, \end{array} $$
(6)

with boundary conditions \(\phi _{S_{a}}(z)=1\) for all \(\Re (z)\geq 0\). This system of equations allows one to compute any order moment of T by direct differentiation, since given any initial state \(S_{i}\in {\mathcal {S}}\), \(m^{(k)}_{S_{i}}=E[T^{k}\ |\ \textbf {X}(0)=S_{i}]=(-1)^{k}\left.\frac {d^{k}\phi _{S_{i}}(z)}{dz^{k}}\right |_{z=0}\). For example, for k=1 the first order moments \(m^{(1)}_{S_{i}}=E[T\ |\ \textbf {X}(0)=S_{i}]\) can be computed from the system of equations

$$\begin{array}{@{}rcl@{}} m^{(1)}_{S_{i}} &=& \frac{1}{\Delta_{S_{i}}}\left(1+\sum\limits_{S_{j}\in A(S_{i})}q_{S_{i}\rightarrow S_{j}}m^{(1)}_{S_{j}}\right),\quad \forall S_{i}\in{\mathcal{C}}, \end{array} $$
(7)

with boundary conditions \(m^{(1)}_{S_{a}}=0\). The system of equations given by (7) can be expressed in matrix form as

$$\begin{array}{@{}rcl@{}} \textbf{m}^{(1)} &=& \textbf{B}\textbf{m}^{(1)}+\textbf{c}, \end{array} $$
(8)

where m(1) is a column vector containing the mean times \(m^{(1)}_{S_{i}}\) for the different initial states \(S_{i}\in {\mathcal {C}}\). For \(S_{i},S_{j}\in \mathcal {C}\), the ijth component of the matrix B is the one-step jump transition probability \(q_{S_{i}\rightarrow S_{j}}/\Delta _{S_{i}}\) and the ith component of the column vector c is \(1/\Delta _{S_{i}}\).
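A sketch for solving (8), under the same assumptions as the previous snippet, is:

```python
import numpy as np

def mean_absorption_times(Q, transient):
    """Solve (8): m^(1) = B m^(1) + c, i.e. (I - B) m^(1) = c; sketch only."""
    Delta = -np.diag(Q)[transient]                       # total outflow rates Delta_{S_i}
    B = Q[np.ix_(transient, transient)] / Delta[:, None]
    np.fill_diagonal(B, 0.0)                             # one-jump probabilities q / Delta
    c = 1.0 / Delta                                      # i-th entry: 1 / Delta_{S_i}
    return np.linalg.solve(np.eye(len(transient)) - B, c)
```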

In fact, the time to absorption T is known to follow a phase-type distribution (He 2014, Chapter 1) PH(τ,T), where instead of considering a single initial state \(S\in {\mathcal {S}}\), one can consider a row vector τ of initial probabilities for states \(S\in {\mathcal {C}}\), and T is the sub-matrix of the infinitesimal generator corresponding to transient states in \({\mathcal {C}}\). The distribution function of T, for any t≥0, is given by (He 2014, Definition 1.2.1)

$$\begin{array}{@{}rcl@{}} F_{T}(t) &=& \mathbb{P}(T\leq t) \ = \ 1-{\boldsymbol \tau}e^{\textbf{T}t}\textbf{1},\quad t\geq0, \end{array} $$

where 1 is a column vector of ones, and any kth order moment of T can be obtained as (He 2014, Proposition 1.2.2)

$$\begin{array}{@{}rcl@{}} E[T^{k}] &=& E\left[T^{k}\ |\ \textbf{X}(0)\sim{\boldsymbol\tau}\right] \ =\ (-1)^{k}k!\,{\boldsymbol\tau}\textbf{T}^{-k}\textbf{1},\quad k\geq1. \end{array} $$

We note that for k=1, this is equivalent to solving system (8).
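A sketch of the distribution function and moment formulas above, assuming the row vector τ and the sub-generator T are available as arrays, is:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def phase_type_cdf(tau, T, t):
    """F_T(t) = 1 - tau exp(Tt) 1 (He 2014, Definition 1.2.1); sketch only."""
    return 1.0 - tau @ expm(T * t) @ np.ones(T.shape[0])

def phase_type_moment(tau, T, k):
    """E[T^k] = (-1)^k k! tau T^{-k} 1 (He 2014, Proposition 1.2.2); sketch only."""
    T_inv_k = np.linalg.matrix_power(np.linalg.inv(T), k)
    return (-1) ** k * factorial(k) * (tau @ T_inv_k @ np.ones(T.shape[0]))
```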

Graph-automorphism lumping

The main problem when dealing with (1) is its size: the number of ODEs equals the number of states, \(M^{N}\). The same problem affects the computation of the summary statistics of interest, and in particular systems (4) and (7). Building on recent work (Kiss et al. 2017; Simon et al. 2011; Ward and Evans 2019), we now describe how network symmetries can be used to reduce the size of the CTMC state-space and how this affects the summary statistics described in “Summary statistics” section.

A partition of the state-space \({\mathcal {L}}=\{L_{1},L_{2},\dots,L_{r}\}\) is called a lumping if, roughly speaking, one can represent the original dynamics across states in \({\mathcal {S}}\) by dynamics across states in \({\mathcal {L}}\) while preserving the Markov property (Kemeny and Snell 1960). We define the collector matrix \(\textbf {V}\in \{0,1\}^{M^{N}\times r}\) (Buchholz 1994) whose ijth component is

$$ \textbf{V}_{ij}=\left\{\begin{array}{cc} 1 & \text{if \(S_{i}\in L_{j}\)}\\ 0 & \text{otherwise} \end{array}\right.. $$
(9)

More precisely, a necessary and sufficient condition for \({\mathcal {L}}\) to be a lumping (Kemeny and Snell 1960) is that for each Li and Lj, there exists a \(\hat {q}_{L_{i}\rightarrow L_{j}}\) that satisfies

$$\begin{array}{*{20}l} \hat{q}_{L_{i}\rightarrow L_{j}}&=(\textbf{Q}\textbf{V})_{kj}\;,\\ &=\sum\limits_{S_{l}\in L_{j}}q_{S_{k}\rightarrow S_{l}},\quad\text{for all } S_{k}\in L_{i}. \end{array} $$

Also, following (Buchholz 1994), if \({\mathcal {L}}\) is a lumping then we say that it is a strict lumping if for each Li and Lj, there also exists a \(\tilde {q}_{L_{i}\rightarrow L_{j}}\) that satisfies

$$\begin{array}{*{20}l} \tilde{q}_{L_{i}\rightarrow L_{j}}&=(\textbf{Q}^{\mathrm{T}}\textbf{V})_{li}\;,\\ &=\sum\limits_{S_{k}\in L_{i}}q_{S_{k}\rightarrow S_{l}},\quad\text{for all } S_{l}\in L_{j}. \end{array} $$

We now show how one can derive a lumping of the state-space from symmetries of the network, by extending the arguments in (Kiss et al. 2017; Simon et al. 2011; Ward and Evans 2019), while providing a matrix-oriented representation of the lumping. An automorphism of a network \({\mathcal {G}}\) is a bijection \(g:{\mathcal {V}}\rightarrow {\mathcal {V}}\) such that \((u,v)\in {\mathcal {E}}\) if and only if \((g(u),g(v))\in {\mathcal {E}}\). We use the shorthand gv=g(v). The set of automorphisms of a network \({\mathcal {G}}\) forms a permutation group \(H=\text {Aut}({\mathcal {G}})\) called the automorphism group of \({\mathcal {G}}\) (Godsil and Royle 2013). Automorphisms of the network permute vertices, so we need to define how they act on the state-space. Let H be the automorphism group of \({\mathcal {G}}\); then we define the action of \(g\in H\) on a state \(S_{i}\in \mathcal {S}\) to be

$$ (g S_{i})(v)=S_{i}(g^{-1}v)\quad\text{for all } v\in {\mathcal{V}}, $$
(10)

i.e. the vertex-state of v in gSi is the same as the vertex-state of g−1v in Si. It is easy to prove that this is indeed an action of H on \(\mathcal {S}\) in the group theoretic sense. It follows from our definition (10) of the action of H on state-space that \({\mathcal {S}}\) is an H-set (Fraleigh 2003). This means that we can partition the state-space into equivalence classes where a pair of states \(S_{i},S_{j}\in {\mathcal {S}}\) are equivalent if and only if there is a gH such that Si=gSj. This partition is known as the orbit partition of the state-space.
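For a small example, the orbit partition can be computed by brute force as in the sketch below; the path network, the explicitly listed automorphisms and the choice M=2 are illustrative assumptions (for larger networks the automorphism group would typically be obtained from a dedicated tool such as nauty).

```python
import itertools

# Illustrative example: the two-edge path network 0-1-2, whose automorphism group
# consists of the identity and the permutation swapping the two end vertices.
N, M = 3, 2
group = [(0, 1, 2), (2, 1, 0)]          # every element of G, each given as (g(0), ..., g(N-1))
states = list(itertools.product(range(M), repeat=N))

def act(g, S):
    """Action (10): the vertex-state of v in gS equals the vertex-state of g^{-1}v in S."""
    g_inv = [0] * len(g)
    for v, gv in enumerate(g):
        g_inv[gv] = v
    return tuple(S[g_inv[v]] for v in range(len(S)))

# Orbit partition: states S_i, S_j are lumped together iff S_i = g S_j for some g in G.
orbits, seen = [], set()
for S in states:
    if S not in seen:
        orbit = {act(g, S) for g in group}
        orbits.append(sorted(orbit))
        seen.update(orbit)

print(orbits)   # e.g. (0,0,1) and (1,0,0) fall in the same orbit, as do (0,1,1) and (1,1,0)
```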

Theorem 1.

Let \(\mathcal {G}=(\mathcal {V},\mathcal {E})\) be a finite network with automorphism group H and \(\mathcal {W}\) be a non-empty and finite set of vertex-states. Let \(\mathcal {S}=\mathcal {W}^{\mathcal {V}}\) be the state-space of a CTMC with infinitesimal generator Q, as defined in (2), with the action of H on \(\mathcal {S}\) defined in (10). If there is a subgroup \(G\leq H\) such that

$$ q_{S_{k}\rightarrow S_{l}}=q_{gS_{k}\rightarrow gS_{l}} \quad\text{for all } g\in G \text{ and } S_{k},S_{l}\in\mathcal{S}, $$
(11)

then the orbit partition of \(\mathcal {S}\) under the action of G is a strict lumping of \(\mathcal {S}\).

Proof

Let \({\mathcal {L}}=\{L_{1},L_{2},\dots,L_{r}\}\) be the orbit partition of the state-space \(\mathcal {S}\) under the action of G. Suppose that \(S_{k}\in L_{i}\) and \(S_{l}\in L_{j}\). For any \(S_{m}\in L_{i}\), we can find a \(g\in G\) such that \(S_{m}=gS_{k}\); thus let \(S_{n}=gS_{l}\in L_{j}\). By assumption, \(q_{S_{k}\rightarrow S_{l}}=q_{S_{m}\rightarrow S_{n}}\). Since \(\mathcal {S}\) is a G-set, g maps \(L_{j}\) bijectively onto itself. Thus for all \(S_{k},S_{m}\in L_{i}\) we have

$$\sum_{S_{l} \in L_{j}}q_{S_{k}\rightarrow S_{l}}=\sum_{S_{n}\in L_{j}}q_{S_{m}\rightarrow S_{n}}.$$

A similar argument can be used to show that for all \(S_{l},S_{n}\in L_{j}\) we have

$$\sum_{S_{k} \in L_{i}}q_{S_{k}\rightarrow S_{l}}=\sum_{S_{m}\in L_{i}}q_{S_{m}\rightarrow S_{n}}.$$

Consequently \({\mathcal {L}}\) is a strict lumping of \(\mathcal {S}\). □

Thus we can make use of network automorphism lumping whenever (11) is satisfied. It has been shown recently that a broad class of ‘homogeneous’ network dynamics models satisfy this criterion for the whole automorphism group H (Kiss et al. 2017; Simon et al. 2011; Ward and Evans 2019). In this paper, we go further and show that certain ‘heterogeneous’ models that respect (at least some) symmetries of the network also satisfy (11). It is straightforward to prove that the set of elements of H satisfying (11) forms a subgroup, and so it can be used to lump the state-space in accordance with Theorem 1. In what follows, we will assume that the models under consideration satisfy (11) (that is, that there are some symmetries in the network that can be exploited for lumping purposes).

We will now show how the lumping of the state-space reduces the dimensionality of the master equation (Kiss et al. 2017; Simon et al. 2011), and how this can be done in a matrix-oriented fashion. We define the distributor matrix \(\textbf {W}\in \mathbb {R}^{r\times M^{N}}\) (Buchholz 1994), whose ijth component is

$$ \textbf{W}_{ij}=\left\{\begin{array}{cc} \frac{1}{\#L_{i}} & \text{if \(S_{j}\in L_{i}\)}\\ 0 & \text{otherwise} \end{array}\right.. $$
(12)

It is easy to show that WV=Ir, where Ir is the r×r identity matrix, and

$$ (\textbf{V}\textbf{W})_{ij}=\left\{\begin{array}{cc} \frac{1}{\#L_{k}} & \text{if \(S_{i},S_{j}\in L_{k}\)}\\ 0 & \text{otherwise} \end{array}\right.. $$

Furthermore, VW commutes with Q, which follows from the fact that \({\mathcal {L}}\) is a strict lumping of \({\mathcal {S}}\). To see this, note that for any \(L_{i},L_{j}\in \mathcal {L}\) we have

$$\begin{array}{*{20}l} \sum_{S_{k}\in L_{i}}\sum_{S_{l}\in L_{j}}q_{S_{k}\rightarrow S_{l}} &=\# L_{i}\sum_{S_{l}\in L_{j}}q_{S_{k}\rightarrow S_{l}}\quad\text{for any } S_{k}\in L_{i},\\ &=\# L_{j}\sum_{S_{k}\in L_{i}}q_{S_{k}\rightarrow S_{l}}\quad\text{for any } S_{l}\in L_{j}. \end{array} $$

But for any SkLi and SlLj we have

$$\begin{array}{*{20}l} (\textbf{Q}\textbf{V}\textbf{W})_{kl}&=\sum_{n}\textbf{Q}_{kn}(\textbf{V}\textbf{W})_{nl}=\frac{1}{\# L_{j}}\sum_{S_{n}\in L_{j}}q_{S_{k}\rightarrow S_{n}}\quad\text{and}\\ (\textbf{V}\textbf{W}\textbf{Q})_{kl}&=\sum_{m}(\textbf{V}\textbf{W})_{km}\textbf{Q}_{ml}=\frac{1}{\# L_{i}}\sum_{S_{m}\in L_{i}}q_{S_{m}\rightarrow S_{l}}, \end{array} $$

thus it follows that QVW=VWQ.

Denote the lumped transition probability by \(\hat {p}_{L_{j},L_{i}}(t)=\mathbb {P}(\textbf {X}(t)\in L_{i}\ |\ \textbf {X}(0)\in L_{j})\), \(\forall L_{i},L_{j}\in {\mathcal {L}}\) and t≥0, and let \(\hat {\mathbf {P}}(t)= (\hat {p}_{L_{j},L_{i}}(t))_{i\in \{1,\dots,r\},\ j\in \{1,\dots,r\}}\). Thus \(\hat {\mathbf {P}}=\textbf {W}\textbf {P}\textbf {V}\) and with the properties described above it is easy to derive the lumped master equation

$$\begin{array}{@{}rcl@{}} \frac{d\hat{\mathbf{P}}(t)}{dt} &=& \hat{\mathbf{P}}(t)\hat{\mathbf{Q}}, \end{array} $$
(13)

where \(\hat {\mathbf {Q}}=\mathbf {W}\mathbf {Q}\mathbf {V}\) is the lumped infinitesimal generator. Note that one must use the same ordering of states in \({\mathcal {S}}\) when constructing the matrices W, Q and V; see the example in the “Applications to classes of model” section.
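A sketch of the construction of V, W and \(\hat{\mathbf{Q}}\), including checks of the identities above (which hold for a strict lumping), could read as follows; orbit membership is assumed to be supplied as lists of state indices consistent with the ordering used to build Q.

```python
import numpy as np

def lump_generator(Q, orbits):
    """Build the collector matrix V of (9), the distributor matrix W of (12) and the
    lumped generator Q_hat = W Q V of (13); a sketch under the stated assumptions."""
    n, r = Q.shape[0], len(orbits)
    V = np.zeros((n, r))
    W = np.zeros((r, n))
    for j, orbit in enumerate(orbits):
        for i in orbit:
            V[i, j] = 1.0
            W[j, i] = 1.0 / len(orbit)
    assert np.allclose(W @ V, np.eye(r))          # W V = I_r
    assert np.allclose(Q @ V @ W, V @ W @ Q)      # V W commutes with Q (strict lumping)
    return W @ Q @ V
```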

To differentiate between the CTMCs on the full state-space and the lumped state-space, we will refer to the states of the lumped state-space as orbits in the remainder of this paper. This highlights the fact that the lumped state-space results from the orbit partition of the unlumped state-space under the action of a sub-group of the network automorphism group, as per Theorem 1.

Lumping and summary statistics

Lumping and probabilities of absorption

We now return to the situation described in the “Probabilities of absorption” section where we assume the state-space has two absorbing states Sa and Sb, and the set of transient states is \({\mathcal {C}}\). We assume that the subgroup G of the automorphism group of the network \({\mathcal {G}}\) satisfying (11) is non-trivial, and that \({\mathcal {L}}=\{L_{1},L_{2},\dots,L_{r}\}\) is the orbit partition of \(\mathcal {C}\) under the action of G. We also assume that Sa and Sb are fixed by all elements in G, so that neither can be lumped with any other state. It follows from the definition of lumping that for every \(L_{i}\in \mathcal {L}\) and \(S_{k},S_{l}\in L_{i}\), \(\Delta _{S_{k}}=\Delta _{S_{l}}\); thus for any \(S_{k}\in L_{i}\) we define \(\Delta _{L_{i}}=\Delta _{S_{k}}\).

Let \(\hat {p}_{L_{i}}^{(a)}\) be the probability of absorption into Sa from the lumping cell (i.e., orbit) \(L_{i}\in {\mathcal {L}}\). These probabilities can be obtained from the lumped transition rates \(\hat {q}_{L_{i}\rightarrow L_{j}}\) in the manner described in the “Probabilities of absorption” section, resulting in the system of equations

$$ \hat{\mathbf{p}}^{(a)}=\hat{\mathbf{A}}\hat{\mathbf{p}}^{(a)}+\hat{\mathbf{b}}^{(a)}. $$

For \(L_{i},L_{j}\in \mathcal {L}\), the ijth component of the matrix \(\hat {\mathbf {A}}\) is the one-step jump transition probability \(\hat {q}_{L_{i}\rightarrow L_{j}}/\Delta _{L_{i}}\), the ith component of the column vector \(\hat {\mathbf {p}}^{(a)}\) is the probability \(\hat {p}_{L_{i}}^{(a)}\) and the ith component of the column vector \(\hat {\mathbf {b}}^{(a)}\) is the one-step jump transition probability \(\hat {q}_{L_{i}\rightarrow S_{a}}/\Delta _{L_{i}}\), which might be zero if Sa cannot be accessed in one jump from Li.

We now show that the probabilities of absorption are the same for all states within the same lumping cell. An automorphism of the network \(g\in G\) is a permutation of the vertices in \(\mathcal {V}\); thus, with the action (10), we can define \(\mathbf{\Pi}_{g}\) to be the representation of g as a permutation matrix on the transient states \(\mathcal {C}\). When (11) is satisfied, we have \(\mathbf{\Pi}_{g}\textbf{A}=\textbf{A}\mathbf{\Pi}_{g}\) and \(\mathbf {\Pi }_{g}\mathbf {b}^{(a)}=\textbf {b}^{(a)}\). Consequently

$$ \mathbf{\Pi}_{g}(\textbf{I}_{\#\mathcal{C}}-\textbf{A})\textbf{p}^{(a)}=(\textbf{I}_{\#\mathcal{C}}-\textbf{A})\mathbf{\Pi}_{g}\textbf{p}^{(a)}=\textbf{b}^{(a)}, $$

and hence \(\textbf {p}^{(a)}=\mathbf {\Pi }_{g}\textbf {p}^{(a)}\). Since the solution to \((\textbf {I}_{\#\mathcal {C}}-\textbf {A})\textbf {p}^{(a)}=\textbf {b}^{(a)}\) is unique, it follows that for any \(L_{i}\in \mathcal {L}\) and \(S_{k},S_{l}\in L_{i}\), \(\hat {p}_{L_{i}}^{(a)}=p_{S_{k}}^{(a)}=p_{S_{l}}^{(a)}\).

Lumping and time to absorption

We now consider the time to absorption for the situation described in the “Time until absorption” section, where there is one absorbing state Sa and the set of transient states is \({\mathcal {C}}\). If one orders states as

$$S_{1} \prec S_{2} \prec \dots \prec S_{M^{N}-1} \prec S_{a}, $$

the infinitesimal generator is given by

$$\begin{array}{@{}rcl@{}} \textbf{Q} &=& \left(\begin{array}{cccccc} -\Delta_{S_{1}} & q_{S_{1}\rightarrow S_{2}} & \dots & q_{S_{1}\rightarrow S_{M^{N}-1}} & q_{S_{1}\rightarrow S_{a}}\\ q_{S_{2}\rightarrow S_{1}} & -\Delta_{S_{2}} & \dots & q_{S_{2}\rightarrow S_{M^{N}-1}} & q_{S_{2}\rightarrow S_{a}}\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ q_{S_{M^{N}-1}\rightarrow S_{1}} & q_{S_{M^{N}-1}\rightarrow S_{2}} & \dots & -\Delta_{S_{M^{N}-1}} & q_{S_{M^{N}-1}\rightarrow S_{a}}\\ 0 & 0 & \dots & 0 & 0\end{array}\right)\\ &=& \left(\begin{array}{cc} \textbf{T} & \textbf{t}\\ \textbf{0} & \textbf{0} \end{array}\right). \end{array} $$

Again we assume that the subgroup G of the automorphism group of the network \({\mathcal {G}}\) is non-trivial, and that \({\mathcal {L}}=\{L_{1},L_{2},\dots,L_{r}\}\) is the orbit partition of \(\mathcal {S}\) under the action of G. Necessarily, Sa is fixed by all elements in G, so we can take \(L_{r}=\{S_{a}\}\) to be the singleton orbit containing this single absorbing state. Let V and W be the collector and distributor matrices of this partition respectively, which have the form

$$\begin{array}{@{}rcl@{}} \textbf{V} &=& \left(\begin{array}{lllll} v_{11} & v_{12} & \dots & v_{1,r-1} & 0\\ v_{21} & v_{22} & \dots & v_{2,r-1} & 0\\ \vdots & \vdots & \vdots & \vdots & \vdots\\ v_{M^{N}-1,1} & v_{M^{N}-1,2} & \dots & v_{M^{N}-1,r-1} & 0\\ 0 & 0 & \dots & 0 & 1 \end{array}\right) \ = \ \left(\begin{array}{ll} \tilde{\mathbf{V}} & \textbf{0}\\ \textbf{0} & 1 \end{array}\right),\\ \textbf{W} &=& \left(\begin{array}{lllll} w_{11} & w_{12} & \dots & w_{1,M^{N}-1} & 0\\ w_{21} & w_{22} & \dots & w_{2,M^{N}-1} & 0\\ \vdots & \vdots & \vdots & \vdots & \vdots\\ w_{r-1,1} & w_{r-1,2} & \dots & w_{r-1,M^{N}-1} & 0\\ 0 & 0 & \dots & 0 & 1 \end{array}\right) \ = \ \left(\begin{array}{ll} \tilde{\mathbf{W}} & \textbf{0}\\ \textbf{0} & 1 \end{array}\right). \end{array} $$

Recall that the distribution of the time to absorption T is given by a phase-type distribution PH(τ,T), where τ is a row vector of the initial probabilities of being in states in \(\mathcal {C}\) and T is the matrix of transition rates between states in \(\mathcal {C}\), as described above. It is well known that a phase-type distribution is not uniquely defined by the pair (τ,T), and we will show that the distribution function of the time to absorption T is the same for the full and lumped Markov chains. Let \(\boldsymbol {\mu }=\boldsymbol {\tau }\tilde {\mathbf {V}}\) and \(\textbf {U}=\tilde {\mathbf {W}}\textbf {T}\tilde {\mathbf {V}},\) so U is the matrix of lumped one-jump transition rates between the (transient) orbits \(\{L_{1},L_{2},\dots,L_{r-1}\}\). Let \(\mathbf{1}_{k}\) be the column vector with k elements all equal to 1; then \(\mathbf {1}_{r-1}=\tilde {\mathbf {W}}\mathbf {1}_{\#\mathcal {C}}\) and \(\tilde {\mathbf {V}}\tilde {\mathbf {W}}\mathbf {1}_{\# \mathcal {C}}=\mathbf {1}_{\# \mathcal {C}}\). An additional consequence of the fact that VW commutes with Q, and hence T, is that \((\tilde {\mathbf {W}} \textbf {T}\tilde {\mathbf {V}})^{n}=\tilde {\mathbf {W}}\textbf {T}^{n}\tilde {\mathbf {V}}\), which is easily proved by induction. It follows that \(\exp (\textbf {U}t)=\tilde {\mathbf {W}}\exp (\textbf {T}t)\tilde {\mathbf {V}}\), \(\tilde {\mathbf {V}}\tilde {\mathbf {W}}\) commutes with \(\exp (\textbf {T}t)\) and consequently

$$\begin{array}{*{20}l} F_{T}(t)&=1-\boldsymbol{\tau}\exp(\textbf{T}t)\mathbf{1}_{\# {\mathcal{C}}},\\ &=1-\boldsymbol{\tau}\exp(\textbf{T}t)\tilde{\mathbf{V}}\tilde{\mathbf{W}}\mathbf{1}_{\# {\mathcal{C}}},\\ &=1-\boldsymbol{\tau} \tilde{\mathbf{V}}\tilde{\mathbf{W}}\exp(\textbf{T}t)\tilde{\mathbf{V}}\tilde{\mathbf{W}}\mathbf{1}_{\# {\mathcal{C}}},\\ &=1-\boldsymbol{\mu}\exp(\textbf{U}t)\mathbf{1}_{r-1}. \end{array} $$

Thus the distribution of the time to absorption into Sa is the same for both (τ,T) and (μ,U). It follows that all moments of T will be the same for both the full and lumped state-spaces. Moreover, the mean times to absorption from any pair of states in the same lumping cell are the same, which can be shown in a manner analogous to that used to show that the probabilities of absorption are the same.
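This equality can also be checked numerically along the following lines (a sketch with illustrative argument names, where tau, T, V_tilde and W_tilde play the roles of \(\boldsymbol{\tau}\), \(\textbf{T}\), \(\tilde{\mathbf{V}}\) and \(\tilde{\mathbf{W}}\) above):

```python
import numpy as np
from scipy.linalg import expm

def lumped_cdf_agrees(tau, T, V_tilde, W_tilde, t=1.0):
    """Compare F_T(t) computed from (tau, T) with that from the lumped pair (mu, U)."""
    mu, U = tau @ V_tilde, W_tilde @ T @ V_tilde
    full = 1.0 - tau @ expm(T * t) @ np.ones(T.shape[0])
    lumped = 1.0 - mu @ expm(U * t) @ np.ones(U.shape[0])
    return np.isclose(full, lumped)
```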

Single-vertex transition processes

We focus in this section on what we call single-vertex-transition (SVT) processes, which satisfy the condition that for every pair of states \(S_{i}\in {\mathcal {S}}\) and \(S_{j}\in A(S_{i})\), \(U(S_{i},S_{j})\) has cardinality \(\#U(S_{i},S_{j})=1\). This means that transitions between states, according to the CTMC \({\mathcal {X}}\), can only occur between states Si and Sj that differ in one vertex-state (that is, in each single jump, only one vertex-state is updated). If the states in \(\mathcal {S}\) are ordered by levels, as described in the “CTMC dynamics on networks” section, then it is clear that non-zero infinitesimal transition rates can only occur between any given state \(S\in {\mathcal {S}}^{(i)}\) and states in either the same level \({\mathcal {S}}^{(i)}\) (provided that \(\#\mathcal {W}>2\)), or states in adjacent levels \({\mathcal {S}}^{(i-1)}\) and \({\mathcal {S}}^{(i+1)}\). This means that (3) becomes

$$\begin{array}{@{}rcl@{}} \textbf{Q} &=& \left(\begin{array}{cccccc} \textbf{Q}_{00} & \textbf{Q}_{01} & \textbf{0} & \dots & \textbf{0} & \textbf{0}\\ \textbf{Q}_{10} & \textbf{Q}_{11} & \textbf{Q}_{12} & \dots & \textbf{0} & \textbf{0}\\ \textbf{0} & \textbf{Q}_{21} & \textbf{Q}_{22} & \dots & \textbf{0} & \textbf{0}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ \textbf{0} & \textbf{0} & \textbf{0} & \dots & \textbf{Q}_{N-1N-1} & \textbf{Q}_{N-1N}\\ \textbf{0} & \textbf{0} & \textbf{0} & \dots & \textbf{Q}_{NN-1} & \textbf{Q}_{NN}\\ \end{array}\right), \end{array} $$
(14)

so that Q is tridiagonal-by-blocks, leading to a quasi-birth-and-death (QBD) process (Latouche and Ramaswami 1999).

Probabilities of absorption

Let us now consider the same situation as the one analysed in the “Probabilities of absorption” section, where the process \({\mathcal {X}}\) has two absorbing states. For convenience, let us assume that the two absorbing states are \(S_{a}=S^{(0)}_{1}\) (i.e., the first state in level \({\mathcal {S}}^{(0)}\)) and \(S_{b}=(M,\dots,M)\) (i.e., the only state in level \({\mathcal {S}}^{(N)}\)), so that

$$\begin{array}{@{}rcl@{}} {\mathcal{S}} &=& {\mathcal{C}}\cup\{S^{(0)}_{1}\}\cup\{(M,\dots,M)\} \end{array} $$

where \({\mathcal {C}}\) is the class of transient states. The motivation for assuming these particular choices of Sa and Sb is purely to place them at the beginning and end respectively of the order of states in state-space. Given the structure by levels above, it is clear that

$$\begin{array}{@{}rcl@{}} {\mathcal{C}} &=& {\cal \hat S}^{(0)}\cup\left(\bigcup\limits_{i=1}^{N-1}{\mathcal{S}}^{(i)}\right), \end{array} $$

where \({\cal \hat S}^{(0)}=\{S^{(0)}_{2},\dots,S^{(0)}_{\#{\mathcal {S}}^{(0)}}\}\) is just the level \({\mathcal {S}}^{(0)}\) without the absorbing state \(S^{(0)}_{1}\).

Since the state \(S^{(0)}_{1}\) is absorbing, \(A(S^{(0)}_{1})=\emptyset \), thus we note that

$$\begin{array}{@{}rcl@{}} \textbf{Q}_{00} &=& \left(\begin{array}{c|cccc} 0 & 0 & \dots & 0 & 0\\ q_{S^{(0)}_{2}\rightarrow S^{(0)}_{1}} & -\Delta_{S^{(0)}_{2}} & \dots & q_{S^{(0)}_{2}\rightarrow S^{(0)}_{\#{\mathcal{S}}^{(0)}-1}} & q_{S^{(0)}_{2}\rightarrow S^{(0)}_{\#{\mathcal{S}}^{(0)}}}\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ q_{S^{(0)}_{\#{\mathcal{S}}^{(0)}-1}\rightarrow S^{(0)}_{1}} & q_{S^{(0)}_{\#{\mathcal{S}}^{(0)}-1}\rightarrow S^{(0)}_{2}} & \dots & -\Delta_{S^{(0)}_{\#{\mathcal{S}}^{(0)}-1}} & q_{S^{(0)}_{\#{\mathcal{S}}^{(0)}-1}\rightarrow S^{(0)}_{\#{\mathcal{S}}^{(0)}}}\\ q_{S^{(0)}_{\#{\mathcal{S}}^{(0)}}\rightarrow S^{(0)}_{1}} & q_{S^{(0)}_{\#{\mathcal{S}}^{(0)}}\rightarrow S^{(0)}_{2}} & \dots & q_{S^{(0)}_{\#{\mathcal{S}}^{(0)}}\rightarrow S^{(0)}_{\#{\mathcal{S}}^{(0)}-1}} & -\Delta_{S^{(0)}_{\#{\mathcal{S}}^{(0)}}} \end{array}\right)\\ &=& \left(\begin{array}{cc} 0 & \textbf{0}\\ \textbf{q}^{(a)}_{0} & \hat{\mathbf{Q}}_{00} \end{array}\right), \end{array} $$
$$\begin{array}{@{}rcl@{}} \textbf{Q}_{01} &=& \left(\begin{array}{lllll} 0 & 0 & \dots & 0 & 0\\ q_{S^{(0)}_{2}\rightarrow S^{(1)}_{1}} & q_{S^{(0)}_{2}\rightarrow S^{(1)}_{2}} & \dots & q_{S^{(0)}_{2}\rightarrow S^{(1)}_{\#{\mathcal{S}}^{(1)}-1}} & q_{S^{(0)}_{2}\rightarrow S^{(1)}_{\#{\mathcal{S}}^{(1)}}}\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ q_{S^{(0)}_{\#{\mathcal{S}}^{(0)}}\rightarrow S^{(1)}_{1}} & q_{S^{(0)}_{\#{\mathcal{S}}^{(0)}}\rightarrow S^{(1)}_{2}} & \dots & q_{S^{(0)}_{\#{\mathcal{S}}^{(0)}}\rightarrow S^{(1)}_{\#{\mathcal{S}}^{(1)}-1}} & q_{S^{(0)}_{\#{\mathcal{S}}^{(0)}}\rightarrow S^{(1)}_{\#{\mathcal{S}}^{(1)}}} \end{array}\right)\\ &=& \left(\begin{array}{c} \textbf{0}\\ \hat{\mathbf{Q}}_{01} \end{array}\right), \end{array} $$
$$\begin{array}{@{}rcl@{}} \textbf{Q}_{10} &=& \left(\begin{array}{cccccc} q_{S^{(1)}_{1}\rightarrow S^{(0)}_{1}} & q_{S^{(1)}_{1}\rightarrow S^{(0)}_{2}} & \dots & q_{S^{(1)}_{1}\rightarrow S^{(0)}_{\#{\mathcal{S}}^{(0)}-1}} & q_{S^{(1)}_{1}\rightarrow S^{(0)}_{\#{\mathcal{S}}^{(0)}}}\\ q_{S^{(1)}_{2}\rightarrow S^{(0)}_{1}} & q_{S^{(1)}_{2}\rightarrow S^{(0)}_{2}} & \dots & q_{S^{(1)}_{2}\rightarrow S^{(0)}_{\#{\mathcal{S}}^{(0)}-1}} & q_{S^{(1)}_{2}\rightarrow S^{(0)}_{\#{\mathcal{S}}^{(0)}}}\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ q_{S^{(1)}_{\#{\mathcal{S}}^{(1)}-1}\rightarrow S^{(0)}_{1}} & q_{S^{(1)}_{\#{\mathcal{S}}^{(1)}-1}\rightarrow S^{(0)}_{2}} & \dots & q_{S^{(1)}_{\#{\mathcal{S}}^{(1)}-1}\rightarrow S^{(0)}_{\#{\mathcal{S}}^{(0)}-1}} & q_{S^{(1)}_{\#{\mathcal{S}}^{(1)}-1}\rightarrow S^{(0)}_{\#{\mathcal{S}}^{(0)}}}\\ q_{S^{(1)}_{\#{\mathcal{S}}^{(1)}}\rightarrow S^{(0)}_{1}} & q_{S^{(1)}_{\#{\mathcal{S}}^{(1)}}\rightarrow S^{(0)}_{2}} & \dots & q_{S^{(1)}_{\#{\mathcal{S}}^{(1)}}\rightarrow S^{(0)}_{\#{\mathcal{S}}^{(0)}-1}} & q_{S^{(1)}_{\#{\mathcal{S}}^{(1)}}\rightarrow S^{(0)}_{\#{\mathcal{S}}^{(0)}}}\\ \end{array}\right) \end{array} $$
$$\begin{array}{@{}rcl@{}} &=& \left(\begin{array}{cc} \textbf{q}^{(a)}_{1} & \hat{\mathbf{Q}}_{10} \end{array}\right), \end{array} $$

and \(\textbf{Q}_{NN-1}=\textbf{0}\) and \(\textbf{Q}_{NN}=0\), since state \((M,\dots,M)\) is also absorbing. The infinitesimal generator of the process \({\mathcal {X}}\) in this scenario is then

$$\begin{array}{*{20}l} \textbf{Q} &= \left(\begin{array}{ccccccccc} 0 & \textbf{0} & \textbf{0} & \textbf{0} & \dots & \textbf{0} & \textbf{0}\\ \textbf{q}_{0}^{(a)} & \hat{\mathbf{Q}}_{00} & \hat{\mathbf{Q}}_{01} & \textbf{0} & \dots & \textbf{0} & \textbf{0}\\ \textbf{q}_{1}^{(a)} & \hat{\mathbf{Q}}_{10} & \textbf{Q}_{11} & \textbf{Q}_{12} & \dots & \textbf{0} & \textbf{0}\\ \textbf{0} & \textbf{0} & \textbf{Q}_{21} & \textbf{Q}_{22} & \dots & \textbf{0} & \textbf{0}\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \dots & \textbf{Q}_{N-1N-1} & \textbf{Q}_{N-1N}\\ \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \dots & \textbf{0} & 0\\ \end{array}\right)\\ &=\left(\begin{array}{ccccc} 0 & \textbf{0} & 0\\ \textbf{q}^{(a)} & \hat{\mathbf{Q}} & \textbf{q}^{(b)} \\ 0 & \textbf{0} & 0 \end{array}\right). \end{array} $$

It is clear that the matrix \(\hat {\mathbf {Q}}\) contains the transition rates between states in the transient class \({\mathcal {C}}\), while the column vectors q(a) and q(b) contain the transition rates from states in the transient class \({\mathcal {C}}\) to the absorbing states \(S_{a}=S^{(0)}_{1}\) and \(S_{b}=(M,\dots,M)\), respectively.

We note here that the system of linear equations given by (4) has a direct correspondence with the structure of the infinitesimal generator Q above. By ordering states in levels as above, one can express (5) as

$$\begin{array}{@{}rcl@{}} \left(\begin{array}{c} \textbf{p}^{(a)}_{0}\\ \textbf{p}^{(a)}_{1}\\ \textbf{p}^{(a)}_{2}\\ \vdots\\ \textbf{p}^{(a)}_{N-1} \end{array}\right) &=& \left(\begin{array}{ccccc} \textbf{A}_{00} & \textbf{A}_{01} & \textbf{0} & \dots & \textbf{0}\\ \textbf{A}_{10} & \textbf{A}_{11} & \textbf{A}_{12} & \dots & \textbf{0}\\ \textbf{0} & \textbf{A}_{21} & \textbf{A}_{22} & \dots & \textbf{0}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0} & \textbf{0} & \textbf{0} & \dots & \textbf{A}_{N-1N-1} \end{array}\right)\left(\begin{array}{c} \textbf{p}^{(a)}_{0}\\ \textbf{p}^{(a)}_{1}\\ \textbf{p}^{(a)}_{2}\\ \vdots\\ \textbf{p}^{(a)}_{N-1} \end{array}\right)+\left(\begin{array}{c} \textbf{b}^{(a)}_{0}\\ \textbf{b}^{(a)}_{1}\\ \textbf{0}\\ \vdots\\ \textbf{0} \end{array}\right). \end{array} $$

Note that the diagonal elements of each block Aii, for 0≤i≤N−1, are equal to zero. Moreover, the sub-vectors \(\textbf {b}_{0}^{(a)}\) and \(\textbf {b}_{1}^{(a)}\) are related to the boundary conditions in (4). In particular, each element in \(\textbf {b}_{0}^{(a)}\) (equivalently, \(\textbf {b}_{1}^{(a)}\)) is obtained by dividing the element corresponding to the same row in \(\textbf {q}^{(a)}_{0}\) (equivalently, \(\textbf {q}_{1}^{(a)}\)), which corresponds to a given state S in level \({\cal \hat S}^{(0)}\subset {\mathcal {C}}\) (equivalently, \({\mathcal {S}}^{(1)}\subset {\mathcal {C}}\)), by \(\Delta_{S}\).

We note that expressing (5) in the QBD structure above is relevant for solving it efficiently. In particular, instead of solving this system in terms of p(a)=(IA)−1b(a), one can propose a forward-elimination-backward-substitution approach, which leads to Algorithm 1.
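A block forward-elimination-backward-substitution sweep of this kind could be organised as in the sketch below. This is an illustration of the idea rather than a transcription of Algorithm 1 itself, and the same routine can be reused for the mean-time system of the next subsection by passing the vectors cj in place of the b(a)j.

```python
import numpy as np

def solve_block_tridiagonal(Adiag, Alow, Aup, b):
    """Solve (I - A)x = b when A is block-tridiagonal, by block forward elimination
    followed by backward substitution (a Thomas-type sweep); sketch only.
      Adiag : diagonal blocks A_{ii},          i = 0..n-1
      Alow  : sub-diagonal blocks A_{i,i-1},   i = 1..n-1
      Aup   : super-diagonal blocks A_{i,i+1}, i = 0..n-2
      b     : right-hand-side sub-vectors b_i
    """
    n = len(Adiag)
    C, d = [None] * n, [None] * n
    # Forward elimination over levels 0, 1, ..., n-1.
    M = np.eye(Adiag[0].shape[0]) - Adiag[0]
    if n > 1:
        C[0] = np.linalg.solve(M, -Aup[0])
    d[0] = np.linalg.solve(M, b[0])
    for i in range(1, n):
        L = -Alow[i - 1]                               # sub-diagonal block of (I - A)
        M = np.eye(Adiag[i].shape[0]) - Adiag[i] - L @ C[i - 1]
        if i < n - 1:
            C[i] = np.linalg.solve(M, -Aup[i])
        d[i] = np.linalg.solve(M, b[i] - L @ d[i - 1])
    # Backward substitution over levels n-1, ..., 0.
    x = [None] * n
    x[n - 1] = d[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - C[i] @ x[i + 1]
    return x
```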

Time until absorption

In this section, we apply similar arguments to the ones above in order to efficiently solve (7), which allows one to compute the first order moments of the time to absorption for an SVT process with one absorbing state; see the “Summary statistics” section. For convenience we assume here that the absorbing state is \(S_{a}=(M,\dots,M)\), the only state in level \({\mathcal {S}}^{(N)}\). Thus, the infinitesimal generator is given by

$$\begin{array}{@{}rcl@{}} \textbf{Q} &=& \left(\begin{array}{ccccccc} \textbf{Q}_{00} & \textbf{Q}_{01} & \textbf{0} & \dots & \textbf{0} & \textbf{0}\\ \textbf{Q}_{10} & \textbf{Q}_{11} & \textbf{Q}_{12} & \dots & \textbf{0} & \textbf{0}\\ \textbf{0} & \textbf{Q}_{21} & \textbf{Q}_{22} & \dots & \textbf{0} & \textbf{0}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ \textbf{0} & \textbf{0} & \textbf{0} & \dots & \textbf{Q}_{N-1N-1} & \textbf{Q}_{N-1N}\\ \textbf{0} & \textbf{0} & \textbf{0} & \dots & \textbf{0} & 0\\ \end{array}\right) \ = \ \left(\begin{array}{cc} \textbf{T} & \textbf{t}\\ \textbf{0} & 0 \end{array}\right). \end{array} $$

Given (7) and the QBD structure of the matrix Q above, and following similar arguments to those in the previous sections, one can rewrite (7) in matrix form as

$$\begin{array}{@{}rcl@{}} \left(\begin{array}{c} \textbf{m}^{(1)}_{0}\\ \textbf{m}^{(1)}_{1}\\ \textbf{m}^{(1)}_{2}\\ \vdots\\ \textbf{m}^{(1)}_{N-1} \end{array}\right) &=& \left(\begin{array}{ccccc} \hat{\mathbf{A}}_{00} & \hat{\mathbf{A}}_{01} & \textbf{0} & \dots & \textbf{0}\\ \hat{\mathbf{A}}_{10} & \hat{\mathbf{A}}_{11} & \hat{\mathbf{A}}_{12} & \dots & \textbf{0}\\ \textbf{0} & \hat{\mathbf{A}}_{21} & \hat{\mathbf{A}}_{22} & \dots & \textbf{0}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0} & \textbf{0} & \textbf{0} & \dots & \hat{\mathbf{A}}_{N-1N-1} \end{array}\right)\left(\begin{array}{c} \textbf{m}^{(1)}_{0}\\ \textbf{m}^{(1)}_{1}\\ \textbf{m}^{(1)}_{2}\\ \vdots\\ \textbf{m}^{(1)}_{N-1} \end{array}\right)+\left(\begin{array}{c} \textbf{c}_{0}\\ \textbf{c}_{1}\\ \textbf{c}_{2}\\ \vdots\\ \textbf{c}_{N-1}\\ \end{array}\right). \end{array} $$

In the equation above, each vector \(\textbf {m}_{j}^{(1)}\) contains the first order moments \(m_{S}^{(1)}\) for states \(S\in {\mathcal {S}}^{(j)}\), for 0≤j≤N−1. According to (7), each row of each matrix \(\hat {\mathbf {A}}_{jj'}\) (for \(j'\in\{j-1,j,j+1\}\)), which corresponds to a state \(S\in {\mathcal {S}}^{(j)}\), is obtained by dividing the same row in \(\textbf {Q}_{jj'}\) by \(\Delta_{S}\), but with elements in the diagonal equal to zero. Finally, each element of each sub-vector cj, for 0≤j≤N−1, which corresponds to a state \(S\in {\mathcal {S}}^{(j)}\), is equal to \(\frac {1}{\Delta _{S}}\). It is clear that Algorithm 1 can be adapted for solving this system, leading to Algorithm 2.

Graph-automorphism lumping

It is easy to prove that, for an SVT process, states in the same orbit belong to the same level, since graph automorphisms only permute the vertex-states among the vertices and hence preserve the number of vertices in vertex-state M. That is, for any 1≤j≤r, if \(S_{k},S_{l}\in L_{j}\) then there exists a \(0\leq p\leq N\) such that \(S_{k},S_{l}\in {\mathcal {S}}^{(p)}\). This has a direct implication on the structure of the matrices V and W, and thus on the structure of the lumped infinitesimal generator \(\hat {\mathbf {Q}}\) in (13).

Recall that the lumped infinitesimal generator \(\hat {\mathbf {Q}}\) describes the transition rates between orbits, so the particular order in which the orbits are considered has an impact on the structure of \(\hat {\mathbf {Q}}\). Let us organise orbits into groups according to the level of the states within the orbits. That is, given the partition \({\mathcal {L}}=\{L_{1},L_{2},\dots,L_{r}\}\), we consider \(\{\mathcal {L}^{(0)},\mathcal {L}^{(1)},\dots,\mathcal {L}^{(N)}\}\) where \(\mathcal {L}^{(j)}\) contains all the orbits with states in \({\mathcal {S}}^{(j)}\). For example, if there is a lumping corresponding to a partition with r=10 orbits, for N=5, where states in L1 and L2 belong to \({\mathcal {S}}^{(0)}\), states in L3 belong to \({\mathcal {S}}^{(1)}\), states in L4 and L5 belong to \({\mathcal {S}}^{(2)}\), states in L6, L7 and L8 belong to \({\mathcal {S}}^{(3)}\), states in L9 belong to \({\mathcal {S}}^{(4)}\) and states in L10 belong to \({\mathcal {S}}^{(5)}\), then

$$\begin{array}{*{20}l} {\mathcal{L}} &= \{\underbrace{L_{1},L_{2}}_{{\mathcal{L}}^{(0)}},\underbrace{L_{3}}_{{\mathcal{L}}^{(1)}},\underbrace{L_{4},L_{5}}_{{\mathcal{L}}^{(2)}},\underbrace{L_{6},L_{7},L_{8}}_{{\mathcal{L}}^{(3)}},\underbrace{L_{9}}_{{\mathcal{L}}^{(4)}},\underbrace{L_{10}}_{{\mathcal{L}}^{(5)}}\}\\ &=\mathcal{L}^{(0)}\cup \mathcal{L}^{(1)}\cup \mathcal{L}^{(2)}\cup \mathcal{L}^{(3)}\cup \mathcal{L}^{(4)}\cup \mathcal{L}^{(5)}. \end{array} $$

Once orbits and states are ordered as above (by levels), it is easy to check that matrices V and W are diagonal-by-blocks,

$$\begin{array}{@{}rcl@{}} \textbf{V} &=& \left(\begin{array}{cccc} \textbf{V}_{0} & \textbf{0} & \dots & \textbf{0}\\ \textbf{0} & \textbf{V}_{1} & \dots & \textbf{0}\\ \vdots & \vdots & \ddots & \vdots\\ \textbf{0} & \textbf{0} & \dots & \textbf{V}_{N} \end{array}\right),\quad \textbf{W} \ =\ \left(\begin{array}{cccc} \textbf{W}_{0} & \textbf{0} & \dots & \textbf{0}\\ \textbf{0} & \textbf{W}_{1} & \dots & \textbf{0}\\ \vdots & \vdots & \ddots & \vdots\\ \textbf{0} & \textbf{0} & \dots & \textbf{W}_{N} \end{array}\right). \end{array} $$

This means that the graph-automorphism lumping approach, when applied to an SVT process (which is a QBD), leads to another QBD, i.e.

$$\begin{array}{@{}rcl@{}} \hat{\mathbf{Q}} &=& \textbf{W}\textbf{Q}\textbf{V} \ = \ \left(\begin{array}{cccccc} \hat{\mathbf{Q}}_{00} & \hat{\mathbf{Q}}_{01} & \textbf{0} & \dots & \textbf{0} & \textbf{0}\\ \hat{\mathbf{Q}}_{10} & \hat{\mathbf{Q}}_{11} & \hat{\mathbf{Q}}_{12} & \dots & \textbf{0} & \textbf{0}\\ \textbf{0} & \hat{\mathbf{Q}}_{21} & \hat{\mathbf{Q}}_{22} & \dots & \textbf{0} & \textbf{0}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ \textbf{0} & \textbf{0} & \textbf{0} & \dots & \hat{\mathbf{Q}}_{N-1N-1} & \hat{\mathbf{Q}}_{N-1N}\\ \textbf{0} & \textbf{0} & \textbf{0} & \dots & \hat{\mathbf{Q}}_{NN-1} & \hat{\mathbf{Q}}_{NN} \end{array}\right) \\ &=& \left(\begin{array}{cccccc} \textbf{W}_{0}\textbf{Q}_{00}\textbf{V}_{0} & \textbf{W}_{0}\textbf{Q}_{01}\textbf{V}_{1} & \textbf{0} & \dots & \textbf{0} & \textbf{0}\\ \textbf{W}_{1}\textbf{Q}_{10}\textbf{V}_{0} & \textbf{W}_{1}\textbf{Q}_{11}\textbf{V}_{1} & \textbf{W}_{1}\textbf{Q}_{12}\textbf{V}_{2} & \dots & \textbf{0} & \textbf{0}\\ \textbf{0} & \textbf{W}_{2}\textbf{Q}_{21}\textbf{V}_{1} & \textbf{W}_{2}\textbf{Q}_{22}\textbf{V}_{2} & \dots & \textbf{0} & \textbf{0}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ \textbf{0} & \textbf{0} & \textbf{0} & \dots & \textbf{W}_{N-1}\textbf{Q}_{N-1N-1}\textbf{V}_{N-1} & \textbf{W}_{N-1}\textbf{Q}_{N-1N}\textbf{V}_{N}\\ \textbf{0} & \textbf{0} & \textbf{0} & \dots & \textbf{W}_{N}\textbf{Q}_{NN-1}\textbf{V}_{N-1} & \textbf{W}_{N}\textbf{Q}_{NN}\textbf{V}_{N} \end{array}\right). \end{array} $$

This also means that Algorithms 1 and 2 can be used when solving the lumped systems of equations in Section 5.1. For Algorithm 2, matrices \(\hat {\mathbf {A}}_{ij}\), for \(j\in\{i-1,i,i+1\}\), are replaced by \(\textbf {W}_{i}\hat {\mathbf {A}}_{ij}\textbf {V}_{j}\), and vectors cj are substituted by Wjcj for all 0≤j≤N−1. In practice, it is better if the matrices \(\textbf {W}_{i}\hat {\mathbf {A}}_{ij}\textbf {V}_{j}\) can be determined without having to compute the matrix multiplication. Algorithm 1 can also be adapted, and this is particularly straightforward when the absorbing state is fixed by the automorphism group.

Applications to classes of model

So far, we have presented two exact methods to analyse network dynamics. In the first case, if the model satisfies (11) then network automorphisms can be used to reduce the state-space dimension via lumping. In the second case, we have shown that the infinitesimal generator of an SVT process has the structure of a QBD, and hence efficient algorithms exist to compute summary statistics of interest. While the second case is relatively easy to check for a given model, in the first case it is generally not practical to check that all elements of the automorphism group and pairs of states satisfy (11). Thus in this section we describe some broad classes of models that satisfy (11), so that one only needs to check whether the model of interest belongs to one of these classes. This builds on recent work (Ward and Evans 2019).

Homogeneous SVT processes

Recall that in an SVT process only one vertex, the transition vertex, changes vertex-states during a single transition, thus it is only necessary to specify the transition rates between pairs of vertex-states to define an SVT process. While this rate may depend on various quantities (such as the number of neighbours of the transition vertex in each vertex-state), when the dependence of the transition rate function on these quantities is independent of the particular vertex involved, we call the model a homogeneous single-vertex transition (HomSVT) model. Consequently, the dynamics of the model can be represented graphically by a directed network on the vertex-states, where edges represent possible transitions. We call this a network dynamics diagram and an example for the SIS model of epidemic spreading is illustrated in Fig. 1. Table 1 is a list of HomSVT processes and Fig. 2 illustrates the corresponding network dynamics diagrams.

Fig. 1 SIS model. A graphical representation of the SIS model of epidemic spreading, indicating that a susceptible (S) vertex becomes infected (I) at a rate βmI, where mI is the number of infected neighbours, and an infected vertex becomes susceptible at a rate γ

Fig. 2 HomSVT network diagrams. Network diagrams for the HomSVT processes listed in Table 1. Vertex-states are indicated by labels within nodes of the networks, possible transitions by directed edges, and the transition rates are written alongside the corresponding edges; rate parameters are listed in Table 1. The degree of the transition vertex is denoted by k, and mI is the number of neighbours of the transition vertex with vertex-state I, and similarly for other vertex-states, e.g. mC, m± etc. In (h), f(x,q)=x^q+ε[1−x^q−(1−x)^q] (see (Castellano et al. 2009) for details). The transitions in (o) and (p) have rate 1 if the inequalities are satisfied

Table 1 HomSVT processes

Note that high-accuracy mean-field approximations for HomSVT models have been derived (Gleeson 2011; 2013; Fennell and Gleeson 2019). It has also been shown that the transition rates of HomSVT models can depend on any metric that is invariant under automorphisms (Ward and Evans 2019), which is natural for typical network dynamics models (otherwise the network structure would be ignored). Thus the full network automorphism group can be used to lump the state-space, i.e. G=H.

We will now illustrate our analysis on a very simple numerical example of a HomSVT model where it is possible to obtain analytical results.

Biased voter model

Inspired by the voter model of opinion dynamics, we introduce a continuous-time HomSVT process, which we dub the “biased voter” model, and consider its dynamics on the two-edge path network illustrated in Fig. 3, which for brevity we refer to simply as the path network.

Fig. 3 Path network. The two-edge path network on which we consider the Biased Voter model

Vertex-states represent opinions, of which there are only two, denoted by 0 or 1, and so \(\mathcal{W}=\{0,1\}\). If vertex v has degree k and opinion 0, it switches to opinion 1 at a rate αm1/k, where α≥0 is a rate parameter and m1 is the number of v’s neighbours that have opinion 1. Similarly, if v has opinion 1 then it switches to opinion 0 at a rate βm0/k, where β≥0 is a rate parameter and m0 is the number of v’s neighbours that have opinion 0. If α=β, then this model corresponds to the standard voter model in continuous time; however, if α>β then there is a bias towards opinion 1. We can set α=1 and consider α≥β without loss of generality, since we can always relabel the opinions, relabel and re-scale the rates, and re-scale time.

The opinion dynamics are then described in terms of a continuous-time Markov chain \({\mathcal {X}}=\{\textbf {X}(t):\ t\geq 0\}\), where X(t) represents the opinions of the population at time t and can take values among states in the state-space \({\mathcal {S}}=\{000,001,010,100,011,101,110,111\}\), where the vertex-states of vertices 1 to 3 are indicated from left to right. The consensus states 000 and 111 are absorbing and with the ordering of states

$$\underbrace{000}_{{\mathcal{S}}^{(0)}}\prec \underbrace{001 \prec 100 \prec 010}_{{\mathcal{S}}^{(1)}} \prec \underbrace{011 \prec 110 \prec 101}_{{\mathcal{S}}^{(2)}} \prec \underbrace{111}_{{\mathcal{S}}^{(3)}}, $$

the biased voter model dynamics on the path network are described in terms of the infinitesimal generator

$$\begin{array}{*{20}l} \textbf{Q} = \left(\begin{array}{ccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \beta & -(\beta+\frac{\alpha}{2}) & 0 & 0 & \frac{\alpha}{2} & 0 & 0 & 0\\ \beta & 0 & -(\beta+\frac{\alpha}{2}) & 0 & 0 & \frac{\alpha}{2} & 0 & 0\\ \beta & 0 & 0 & -(\beta+2\alpha) & \alpha & \alpha & 0 & 0\\ 0 & \frac{\beta}{2} & 0 & 0 & -(\frac{\beta}{2}+\alpha) & 0 & 0 & \alpha\\ 0 & 0 & \frac{\beta}{2} & 0 & 0 & -(\frac{\beta}{2}+\alpha) & 0 & \alpha\\ 0 & \beta & \beta & 0 & 0 & 0 & -(2\beta+\alpha) & \alpha\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right). \end{array} $$
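As an illustration (not the authors' code; the 0-indexed vertex labels and variable names are ours), this generator can be reconstructed programmatically from the flip rates and the state ordering above:

```python
import numpy as np

# Path network 1-2-3 (0-indexed here), biased voter flip rates, and the
# state ordering used above.
neighbours = {0: [1], 1: [0, 2], 2: [1]}
states = ["000", "001", "100", "010", "011", "110", "101", "111"]
idx = {s: n for n, s in enumerate(states)}
alpha, beta = 1.0, 0.5            # the values used later in Fig. 4

Q = np.zeros((8, 8))
for s in states:
    for v in range(3):
        k = len(neighbours[v])
        m1 = sum(s[u] == "1" for u in neighbours[v])
        m0 = k - m1
        rate = alpha * m1 / k if s[v] == "0" else beta * m0 / k
        if rate > 0:
            t = s[:v] + ("1" if s[v] == "0" else "0") + s[v + 1:]
            Q[idx[s], idx[t]] += rate
np.fill_diagonal(Q, -Q.sum(axis=1))   # diagonal = minus the off-diagonal row sums
```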

However, from Fig. 3 we observe that swapping vertices 1 and 3 leaves the graph unchanged. This is the only non-trivial symmetry in the graph and thus the automorphism group is the cyclic group of order 2. Swapping vertices 1 and 3 in the state 001 results in 100, thus these states can be lumped together. Similarly, states 011 and 110 can be lumped together and all other states are fixed by this swap. Consequently, the lumping partition can be written as \({\mathcal {L}}=\{L_{1},L_{2},\dots,L_{6}\}\), where L1={000}, L2={001,100}, L3={010}, L4={011,110}, L5={101} and L6={111}. For this example, the collector and distributor matrices from the “Graph-automorphism lumping” section are

$$\begin{array}{@{}rcl@{}} \textbf{V} &=& \left(\begin{array}{cccc} \textbf{V}_{0} & \textbf{0} & \textbf{0} & \textbf{0}\\ \textbf{0} & \textbf{V}_{1} & \textbf{0} & \textbf{0}\\ \textbf{0} & \textbf{0} & \textbf{V}_{2} & \textbf{0}\\ \textbf{0} & \textbf{0} & \textbf{0} & \textbf{V}_{3} \end{array}\right) \ = \ \left(\begin{array}{c|cc|cc|c} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right)\quad\text{and}\\ \textbf{W} &=& \left(\begin{array}{cccc} \textbf{W}_{0} & \textbf{0} & \textbf{0} & \textbf{0}\\ \textbf{0} & \textbf{W}_{1} & \textbf{0} & \textbf{0}\\ \textbf{0} & \textbf{0} & \textbf{W}_{2} & \textbf{0}\\ \textbf{0} & \textbf{0} & \textbf{0} & \textbf{W}_{3} \end{array}\right) \ = \ \left(\begin{array}{c|ccc|ccc|c} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right) \end{array} $$

respectively, thus the lumped infinitesimal generator is

$$\begin{array}{*{20}l} \hat{\mathbf{Q}} &=\textbf{W}\textbf{Q}\textbf{V}\\ &=\left(\begin{array}{ccccccccc} 0 & 0 & 0 & 0 & 0 & 0\\ \beta & -(\beta+\frac{\alpha}{2}) & 0 & \frac{\alpha}{2} & 0 & 0\\ \beta & 0 & -(\beta+2\alpha) & 2\alpha & 0 & 0\\ 0 & \frac{\beta}{2} & 0 & -(\frac{\beta}{2}+\alpha)& 0 & \alpha\\ 0 & 2\beta & 0 & 0 & -(2\beta+\alpha) & \alpha\\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right). \end{array} $$
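Continuing the illustrative snippet above (reusing Q and idx), the collector and distributor matrices and the lumped generator can be reproduced directly from the partition:

```python
# Collector V and distributor W built from the lumping partition, and the
# lumped generator W Q V.
partition = [["000"], ["001", "100"], ["010"], ["011", "110"], ["101"], ["111"]]
V = np.zeros((8, 6))
W = np.zeros((6, 8))
for c, cell in enumerate(partition):
    for s in cell:
        V[idx[s], c] = 1.0
        W[c, idx[s]] = 1.0 / len(cell)
Q_hat = W @ Q @ V    # reproduces the 6-by-6 lumped generator displayed above
```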

We now illustrate for the original and lumped CTMCs how to compute probabilities of absorption into the states 000 and 111, given by

$$\begin{array}{@{}rcl@{}} p_{S}^{(000)} &=& \lim\limits_{t\rightarrow+\infty}\mathbb{P}(\textbf{X}(t)=000\ |\ \textbf{X}(0)=S),\quad S\in{\mathcal{S}},\quad\text{and}\\ p_{S}^{(111)} &=& \lim\limits_{t\rightarrow+\infty}\mathbb{P}(\textbf{X}(t)=111\ |\ \textbf{X}(0)=S) \ = \ 1-p_{S}^{(000)},\quad S\in{\mathcal{S}}. \end{array} $$

In order to compute these probabilities, (4) leads to

$$\begin{array}{@{}rcl@{}} p_{001}^{(000)}\left(\beta+\frac{\alpha}{2}\right) &=& \beta p_{000}^{(000)}+\frac{\alpha}{2} p_{011}^{(000)},\\ p_{100}^{(000)}\left(\beta+\frac{\alpha}{2}\right) &=& \beta p_{000}^{(000)}+\frac{\alpha}{2} p_{110}^{(000)},\\ p_{010}^{(000)}(\beta+2\alpha) &=& \beta p_{000}^{(000)}+\alpha p_{011}^{(000)}+\alpha p_{110}^{(000)},\\ p_{011}^{(000)}\left(\frac{\beta}{2}+\alpha\right) &=& \frac{\beta}{2} p_{001}^{(000)}+\alpha p_{111}^{(000)},\\ p_{110}^{(000)}\left(\frac{\beta}{2}+\alpha\right) &=& \frac{\beta}{2} p_{100}^{(000)}+\alpha p_{111}^{(000)},\\ p_{101}^{(000)}(2\beta+\alpha) &=& \beta p_{001}^{(000)}+\beta p_{100}^{(000)}+\alpha p_{111}^{(000)}. \end{array} $$

Taking into account the boundary conditions \(p_{000}^{(000)}=1\) and \(p_{111}^{(000)}=0\), we get the system in matrix form

$$\begin{array}{@{}rcl@{}} \left(\begin{array}{c} p_{001}^{(000)}\\ p_{100}^{(000)}\\ p_{010}^{(000)}\\ p_{011}^{(000)}\\ p_{110}^{(000)}\\ p_{101}^{(000)} \end{array}\right) &=& \left(\begin{array}{ccccccc} 0 & 0 & 0 & \frac{\alpha}{\alpha+2\beta} & 0 & 0\\ 0 & 0 & 0 & 0 & \frac{\alpha}{\alpha+2\beta} & 0\\ 0 & 0 & 0 & \frac{\alpha}{2\alpha+\beta} & \frac{\alpha}{2\alpha+\beta} & 0\\ \frac{\beta}{2\alpha+\beta} & 0 & 0 & 0 & 0 & 0\\ 0 & \frac{\beta}{2\alpha+\beta} & 0 & 0 & 0 & 0\\ \frac{\beta}{\alpha+2\beta} & \frac{\beta}{\alpha+2\beta} & 0 & 0 & 0 & 0 \end{array}\right)\left(\begin{array}{c} p_{001}^{(000)}\\ p_{100}^{(000)}\\ p_{010}^{(000)}\\ p_{011}^{(000)}\\ p_{110}^{(000)}\\ p_{101}^{(000)} \end{array}\right)+\left(\begin{array}{c} \frac{2\beta}{\alpha+2\beta}\\ \frac{2\beta}{\alpha+2\beta}\\ \frac{\beta}{2\alpha+\beta}\\ 0\\ 0\\ 0 \end{array}\right)\\ &=& \left(\begin{array}{cc} \textbf{0} & \textbf{A}_{12}\\ \textbf{A}_{21} & \textbf{0} \end{array}\right)\left(\begin{array}{c} \textbf{p}_{1}\\ \textbf{p}_{2} \end{array}\right)+\left(\begin{array}{c} \textbf{b}_{1}\\ \textbf{0} \end{array}\right), \end{array} $$

which inherits the structure of Q for the transient (i.e. non-absorbing) states. We note that in this case, because 000 is a single state within its level, b1=A10. The lumped system is

$$\begin{array}{@{}rcl@{}} \left(\begin{array}{c} p_{2}^{(000)}\\ p_{3}^{(000)}\\ p_{4}^{(000)}\\ p_{5}^{(000)} \end{array}\right) &=& \left(\begin{array}{ccccc} 0 & 0 & \frac{\alpha}{\alpha+2\beta} & 0\\ 0 & 0 & \frac{2\alpha}{2\alpha+\beta} & 0\\ \frac{\beta}{2\alpha+\beta} & 0 & 0 & 0\\ \frac{2\beta}{\alpha+2\beta} & 0 & 0 & 0 \end{array}\right)\left(\begin{array}{c} p_{2}^{(000)}\\ p_{3}^{(000)}\\ p_{4}^{(000)}\\ p_{5}^{(000)} \end{array}\right)+\left(\begin{array}{c} \frac{2\beta}{\alpha+2\beta}\\ \frac{\beta}{2\alpha+\beta}\\ 0\\ 0 \end{array}\right)\\ &=& \left(\begin{array}{cc} \textbf{0} & \textbf{W}_{1}\textbf{A}_{12}\textbf{V}_{2}\\ \textbf{W}_{2}\textbf{A}_{21}\textbf{V}_{1} & \textbf{0} \end{array}\right)\left(\begin{array}{c} \tilde{\mathbf{p}}_{1}\\ \tilde{\mathbf{p}}_{2} \end{array}\right)+\left(\begin{array}{c} \textbf{W}_{1}\textbf{A}_{10}\textbf{V}_{0}\\ \textbf{0} \end{array}\right), \end{array} $$

where \(\tilde {\mathbf {p}}_{i}\) is just the lumped version of pi, and \(p_{i}^{(000)}={\lim }_{t\rightarrow \infty }\mathbb {P}(\textbf {X}(t)=000\ | \ \textbf {X}(0)\in L_{i})\). To apply Algorithm 1, we note that since 000 is the only state in the zeroth level, H0=1 and J0=0. Then if \(\tilde {\mathbf {A}}_{12}=\textbf {W}_{1}\textbf {A}_{12}\textbf {V}_{2}\) and \(\tilde {\mathbf {A}}_{21}=\textbf {W}_{2}\textbf {A}_{21}\textbf {V}_{1}\), we also have

$$ \textbf{H}_{1}=\textbf{I}_{2},\quad\textbf{J}_{1}=\textbf{W}_{1}\textbf{A}_{10}\textbf{V}_{0},\quad \textbf{H}_{2}=\mathbf{I}_{2}-\tilde{\mathbf{A}}_{21}\tilde{\mathbf{A}}_{12},\quad\text{and}\quad \textbf{J}_{2}=\tilde{\mathbf{A}}_{21}\textbf{J}_{1}. $$

Since H2 is a two-by-two matrix, we can easily find \(\tilde {\mathbf {p}}_{2}=\textbf {H}_{2}^{-1}\textbf {J}_{2}\) and \(\tilde {\mathbf {p}}_{1}=\tilde {\mathbf {A}}_{12}\tilde {\mathbf {p}}_{2}+\textbf {J}_{1}\), resulting in

$$\begin{array}{@{}rcl@{}} p_{001}^{(000)} \ = \ p_{100}^{(000)} \ = \ p_{2}^{(000)} &=&\frac{(2\alpha+\beta)\beta}{(\alpha+\beta)^{2}} \ = \ 1-p_{2}^{(111)},\\ p_{010}^{(000)} \ = \ p_{3}^{(000)} &=&\frac{(\alpha^{2}+4\alpha\beta+\beta^{2})\beta}{(2\alpha+\beta)(\alpha+\beta)^{2}}\ = \ 1-p_{3}^{(111)},\\ p_{011}^{(000)} \ = \ p_{110}^{(000)} \ = \ p_{4}^{(000)} &=&\frac{\beta^{2}}{(\alpha+\beta)^{2}} \ = \ 1-p_{4}^{(111)},\\ p_{101}^{(000)} \ = \ p_{5}^{(000)} &=&\frac{2(2\alpha+\beta)\beta^{2}}{(\alpha+2\beta)(\alpha+\beta)^{2}} \ = \ 1-p_{5}^{(111)}. \end{array} $$
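As a numerical sanity check (continuing the illustrative snippets above, and not part of the original derivation), these expressions can be recovered by solving the first-step system on the lumped generator directly:

```python
# First-step argument on the lumped generator: restrict to the transient
# lumped states and solve -Q_TT p = b.
transient = [1, 2, 3, 4]                    # lumped indices of L2, L3, L4, L5
QT = Q_hat[np.ix_(transient, transient)]    # transient-to-transient block
b = Q_hat[transient, 0]                     # rates into the absorbing state 000
p_000 = np.linalg.solve(-QT, b)             # probabilities of absorption into 000
# With alpha = 1 and beta = 0.5 this gives approximately
# [0.5556, 0.2889, 0.1111, 0.2778], matching the formulas above.
```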

In Fig. 4, we illustrate the lumped transition rates and the probabilities of absorption for this example in the case α=1 and β=0.5. We use a single state in each orbit to represent that orbit (e.g. the state 100 represents the lumping cell L2), and the number of states within each orbit is indicated by the size of the grey circles. The lumped transition rates are illustrated by arrows, with the size of each arrow head scaled according to the corresponding transition rate. The coloured circles indicate the probabilities of absorption; those above each state correspond to absorption into the state 111 and those below correspond to absorption into the state 000. The colour and size of these circles correspond to the absorption probability according to the colour bar. From Fig. 4 it is clear that states 010 and 101 are not accessible from any other states. Moreover, the bias means that it is more likely that the absorbing state 111 is reached from 010 than that the absorbing state 000 is reached from 101.

Fig. 4 Path network transitions. Lumped transition rates and probabilities of absorption for the Biased Voter model on the two-edge path network with α=1 and β=0.5. Orbits are indicated by a single representative state, e.g. 000. The size of the grey circles indicates the size of the corresponding orbit, arrows illustrate non-zero transition rates and coloured circles correspond to absorption probabilities into 000 (below the orbit representative) and 111 (above the orbit representative)

Heterogeneous SVT processes

It is also possible to use network automorphisms to lump the state-space of heterogeneous SVT (HetSVT) models, if the heterogeneity respects (at least some of) the network symmetries. As an example, we consider an SIS epidemic spreading model inspired by a peripatetic health-care worker network (Temime et al. 2009; López-García and Kypraios 2018), illustrated in Fig. 5. This network represents the spread of a hospital-acquired infection in an intensive care unit where four patients (nodes 4, 5, 6 and 7) are treated by different health-care workers (nodes 1, 2 and 3). The different types of health care lead to different infection rates β1, β2 and β3. We consider an SIS epidemic model where all nodes recover at rate γ, and where the rate at which each node becomes infected (\(S\rightarrow I\)) is the sum of all rates βi over edges with infected neighbours. In particular, for the case illustrated in Fig. 5, if one considers the notation β14=β41=β1, β24=β42=β2, β25=β52=β3, etc, and βij=0 if vertices i and j are not connected, then each susceptible vertex i becomes infected at a rate \(\sum _{j}\beta _{ji}S(j)\), where S(j)=0 if j is susceptible, and S(j)=1 if j is infective.
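In code, this infection mechanism is simply a weighted-adjacency product. The following is a minimal sketch (our own illustrative function; the matrix B stands for any weighted adjacency matrix with entries βij, and we do not reproduce the specific matrix of Fig. 5 here):

```python
import numpy as np

def infection_rates(B, S):
    """S -> I rates in a HetSVT SIS model: B[i, j] = beta_ij (zero if vertices
    i and j are not connected) and S is a 0/1 vector with S[j] = 1 if vertex j
    is infective. Susceptible vertex i becomes infected at rate
    sum_j B[j, i] * S[j]; the rate is zeroed for vertices already infective."""
    B, S = np.asarray(B, dtype=float), np.asarray(S, dtype=float)
    return (1.0 - S) * (B.T @ S)
```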

Fig. 5 Peripatetic health-care worker network with heterogeneous infection rates

The size of the state-space for this example is \(2^{7}=128\), so we will not write out the full state-space or the infinitesimal generator. As in the Biased Voter model example, we will denote states by 0000000, 0000001 etc., where the vertex-states of vertices 1 to 7 are indicated from left to right. The state 0000000 is the only absorbing state.

We can represent this type of HetSVT model using a weighted network, where the positive weights correspond to the heterogeneous infections rates. The automorphism group of this weighted network preserves weights as well as edges and consequently satisfies (11), so we can use these symmetries to lump state-space. For the example illustrated in Fig. 5, the automorphism group is generated by a single permutation, which in cycle notation is (2 3)(4 6)(5 7), and is isomorphic to the cyclic group of order 2. Note that the automorphism group of the weighted graph is a sub-group of the automorphism group of the unweighted graph. However, we will assume the particular case where β2=β3. In this case, the generators of the automorphism group of the weighted network are (4 5) and (2 3)(4 6)(5 7), and this group is isomorphic to the dihedral group D8, which is also the automorphism group of the unweighted graph. The corresponding lumped state-space consists of 42 orbits.
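The orbit count can be checked by brute force. The following sketch (our own illustrative code, assuming β2=β3 so that both generators preserve weights) closes the two generators into the full automorphism group and enumerates the orbits of the \(2^{7}\) vertex-states:

```python
from itertools import product

def compose(g, h):
    """(g o h)(v) = g(h(v)); permutations stored as tuples of 1-based images."""
    return tuple(g[h[v] - 1] for v in range(7))

gens = [(1, 2, 3, 5, 4, 6, 7),     # (4 5)
        (1, 3, 2, 6, 7, 4, 5)]     # (2 3)(4 6)(5 7)
group, new = {tuple(range(1, 8))}, set(gens)
while new:                          # close the generators under composition
    group |= new
    new = {compose(g, h) for g in group for h in group} - group

orbits = {frozenset(tuple(s[g[v] - 1] for v in range(7)) for g in group)
          for s in product((0, 1), repeat=7)}
print(len(group), len(orbits))      # prints: 8 42
```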

In Fig. 6 we illustrate the structure of the lumped infinitesimal generator for the case γ=1.0, β1=0.12, β2=β3=0.35.

Fig. 6 Lumped transition rates for the SIS model on the peripatetic health-care worker network. Orbits are indicated by a single representative state that is formatted in correspondence with Fig. 5. The size of the grey circles indicates the size of the corresponding orbit, arrows illustrate non-zero transition rates and the size of the arrow head scales with the corresponding transition rate

We choose a single state to represent each orbit in the lumping partition and these are illustrated in correspondence with Fig. 5, so that the top vertex is vertex 1, with 2 below on the left, 3 below on the right and so on. The size of each orbit is indicated by the grey circle behind the orbit representative, and the sizes of these are either 1, 2, 4 or 8. The possible transitions between orbits are indicated by arrows between the orbit representatives. As in Fig. 4, the sizes of the arrow heads are scaled according to the corresponding transition rate. Note that for each level between 1 and 4, there is at least one state that cannot be reached by an infection, e.g. the state 0011100 in level 3.

In Fig. 7 we illustrate the mean times to absorption from each orbit in the lumping partition for the cases β1=0.12, 1.0 and 2.0, each with γ=1.0 and β2=β3=0.35.

Fig. 7 Lumped mean absorption times for the SIS model of epidemics on the peripatetic health-care worker network illustrated in Fig. 5, with recovery rate γ=1 and infection rates β1=0.12, 1 and 2, and β2=β3=0.35. Initial orbits S are indicated by orbit representatives on the horizontal axis and the mean time to absorption mS is plotted on the vertical axis

The initial orbits are indicated on the horizontal axis by the corresponding orbit representatives and the time to absorption is plotted on the vertical axis. The orbits are grouped together by level and the levels are indicated by the shading. Note that, as one would expect, the largest absorption times are from the state 1111111, but the absorption time increases dramatically from β1=1 to β1=2. While the absorption times generally increase with the level, there are several cases where the absorption time is higher from a lower-level state than from some of the higher-level states, indicating the importance of network effects.
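For completeness, the mean absorption times shown in Fig. 7 follow from a standard first-step argument on the lumped generator; a generic sketch (illustrative only, with our own function and argument names) is:

```python
import numpy as np

def mean_absorption_times(Q_lumped, transient):
    """Mean times to absorption from each transient (lumped) state: solve
    -Q_TT m = 1, where Q_TT is the restriction of the generator to the
    transient states whose indices are listed in `transient`."""
    QT = np.asarray(Q_lumped)[np.ix_(transient, transient)]
    return np.linalg.solve(-QT, np.ones(len(transient)))
```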

Discussion

In this paper we have proposed a unified framework to represent network dynamics in terms of continuous-time Markov chains. We have shown how a sub-set of these models (SVT processes) leads to the analysis of QBD processes in the theory of CTMCs, for which a number of algorithmic techniques are already available in the literature. Instead of focusing on studying the master equation, our interest has been the computation of a number of summary statistics related to absorption (namely, probabilities of absorption when two absorbing states are present in the system, and the time until absorption when there is a single absorbing state). We have extended the results in (Kiss et al. 2017; Simon et al. 2011) to a broader class of models in which network symmetries can be used to reduce the size of the state-space via lumping; we have also illustrated how the lumping approach can be represented in a matrix-oriented fashion, and the interplay between this approach and the QBD structure of SVT processes.

We believe that our results should generalise to broader classes of network; for example, heterogeneous SVT processes could be re-framed as dynamics on weighted networks. Another possible future direction for this work is to consider how the lumping criterion that we have presented, (11), could be applied to more general types of model symmetry, including cases where there is no underlying network. For example, there is potential to exploit lumping based on model symmetry in other disciplines, including queuing theory, cellular and stochastic automata (Buchholz and Kemper 2004), Markovian agent-based models (Banisch et al. 2013; Banisch and Lima 2015; KhudaBukhsh et al. 2019), hierarchical Markov models (Buchholz 1995), continuous-time Bayesian networks (Nodelman et al. 2002) and more general CTMC models.

The interpretation of the summary statistics related to absorption depends on the process under consideration, cf. consensus in the voter model with the end of an infection in the SIS epidemic model. However, it is clear that our approach generalises to other summary statistics amenable to analysis by means of first-step arguments. This includes quasi-stationary distributions (Darroch and Seneta 1967; Gómez-Corral and López-García 2012), the probability distribution of the number of events (e.g., infections) during a particular time interval (e.g., until the end or detection of the outbreak) (Economou et al. 2015; Gómez-Corral and López-García 2017; López-García et al. 2019), the probability of a particular node (i.e., individual) suffering a particular state (e.g., an infection) during a particular time interval (e.g., during an outbreak) (López-García 2016), the area under the curve of infectives for epidemic processes (Ball and Clancy 1995), or stationary (i.e., steady-state) distributions for non-absorbing processes (Economou et al. 2015).

Exact analyses of Markov chain dynamics on networks are rare since authors typically resort to mean-field or related approximations. These can do surprisingly well, but it is unclear what model and network features ensure this (Gleeson et al. 2012). One of the simplest mean-field approximations corresponds to the ‘well-mixed’ case, whose master equation can be viewed as the network automorphism lumping of the fully connected network (Kiss et al. 2017; Simon et al. 2011). It would be interesting to identify other graphs where symmetry gives rise to significant lumping and better dynamical approximations. For example, it has been shown that the number of lumped states is linear in the number of vertices for the complete and star graphs, and is cubic in the number of vertices for the household graph (Simon et al. 2011). Other graphs that have high symmetry, and hence a (relatively) small number of lumped states, include bi- and multi-partite graphs, regular trees, graphs with multiple isomorphic disconnected components and graphs with many leaves. However, our experience is that identifying such graphs can be extremely difficult: they necessarily have a large amount of symmetry, which in turn makes computing the lumped state-space challenging. An alternative approach might be to consider how lumping could be used to quantify the accuracy of mean-field approximations, for example by considering how close certain partitions of states are to exact lumping partitions, or approximate lumping based on structurally similar sets of vertices, rather than vertices that can be permuted via automorphisms, e.g. using local symmetries (KhudaBukhsh et al. 2019). There is also scope to develop more accurate mean-field approximations, similar to those in (Gleeson 2011; 2013; Fennell and Gleeson 2019), for the type of unified framework proposed here.

In summary, we have presented a general framework of dynamics on networks. One of the main purposes of this general model is to show how a wide range of dynamical processes on networks can be formulated as Markov chains. Our hope is that this will help to formalise the analytical treatment of more complex and more realistic models, ultimately improving our understanding of the phenomena that they represent.

References

  • Axelrod, R (1997) The dissemination of culture: A model with local convergence and global polarization. Confl Resolut 41(2):203–226.

  • Ball, F, Clancy D (1995) The final outcome of an epidemic model with several different types of infective in a large population. J Appl Prob 32(3):579–590.

  • Banisch, S, Lima R, Araújo T (2013) Aggregation and emergence in agent-based models: A Markov chain approach In: Proceedings of the European Conference on Complex Systems 2012, 3–7.. Springer, Cham.

  • Banisch, S, Lima R (2015) Markov chain aggregation for simple agent-based models on symmetric networks: The voter model. Adv Compl Syst 18(03n04):1550011.

  • Barrat, A, Barthelemy M, Vespignani A (2008) Dynamical Processes on Complex Networks. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511791383.

  • Baronchelli, A, Felici M, Loreto V, Caglioti E, Steels L (2006) Sharp transition towards shared vocabularies in multi-agent systems. J Stat Mech 2006(06):06014.

  • Bass, FM (1969) A new product growth for model consumer durables. Manag Sci 15(5):215–227.

  • Bean, N, Bright L, Latouche G, Pearce C, Pollett P, Taylor PG (1997) The quasi-stationary behavior of quasi-birth-and-death processes. Ann Appl Prob 7(1):134–155.

  • Bonabeau, E, Theraulaz G, Deneubourg J-L (1995) Phase diagram of a model of self-organizing hierarchies. Phys A 217(3-4):373–392.

  • Buchholz, P (1994) Exact and ordinary lumpability in finite markov chains. J Appl Prob 31(1):59–75.

  • Buchholz, P (1995) Hierarchical markovian models: symmetries and reduction. Perform Eval 22(1):93–110.

  • Buchholz, P, Kemper P (2004) Kronecker based matrix representations for large markov models In: Validation of Stochastic Systems, 256–295.. Springer, Berlin.

  • Cao, J, Wang Y, Alofi A, Al-Mazrooei A, Elaiw A (2014) Global stability of an epidemic model with carrier state in heterogeneous networks. IMA J Appl Math 80(4):1025–1048.

  • Castellano, C, Marsili M, Vespignani A (2000) Nonequilibrium phase transition in a model for social influence. Phys Rev Lett 85(16):3536.

  • Castellano, C, Vilone D, Vespignani A (2003) Incomplete ordering of the voter model on small-world networks. Europhys Lett 63(1):153.

  • Castellano, C, Muñoz MA, Pastor-Satorras R (2009) Nonlinear q-voter model. Phys Rev E 80(4):041129.

  • Castellano, C, Fortunato S, Loreto V (2009) Statistical physics of social dynamics. Rev Modern Phys 81(2):591.

  • Castelló, X, Eguíluz VM, San Miguel M (2006) Ordering dynamics with two non-excluding options: bilingualism in language competition. New J Phys 8(12):308.

  • Castro, M, López-García M, Lythe G, Molina-París C (2018) First passage events in biological systems with non-exponential inter-event times. Sci Rep 8(1):15054.

  • Daley, DJ, Kendall DG (1964) Epidemics and rumours. Nature 204(4963):1118.

  • Darroch, JN, Seneta E (1967) On quasi-stationary distributions in absorbing continuous-time finite Markov chains. J Appl Prob 4(1):192–6.

  • Derisavi, S, Hermanns H, Sanders WH (2003) Optimal state-space lumping in markov chains. Inf Process Lett 87(6):309–315.

  • Deffuant, G, Neau D, Amblard F, Weisbuch G (2000) Mixing beliefs among interacting agents. Adv Compl Sys 3(01n04):87–98.

  • Durrett, R (2007) Random Graph Dynamics, Vol. 200. Cambridge University Press, Cambridge.

  • Economou, A, Gómez-Corral A, López-García M (2015) A stochastic SIS epidemic model with heterogeneous contacts. Physica A: Stat Mech Appl 421:78–97.

  • Fennell, PG, Gleeson JP (2019) Multistate dynamical processes on networks: analysis through degree-based approximation frameworks. SIAM Rev 61(1):92–118.

  • Fortunato, S (2004) The Krause–Hegselmann consensus model with discrete opinions. Int J Modern Phys C 15(07):1021–1029.

  • Fraleigh, JB (2003) A First Course in Abstract Algebra. 7th. Pearson Education, London.

  • Galam, S (2002) Minority opinion spreading in random geometry. Eur Phys J B 25(4):403–406.

  • Glauber, RJ (1963) Time-dependent statistics of the Ising model. J Math Phys 4(2):294–307.

  • Gleeson, JP (2011) High-accuracy approximation of binary-state dynamics on networks. Phys Rev Lett 107(6):68701.

  • Gleeson, JP, Melnik S, Ward JA, Porter MA, Mucha PJ (2012) Accuracy of mean-field theory for dynamics on real-world networks. Phys Rev E 85(2):026106.

  • Gleeson, JP, Hurd T, Melnik S, Hackett A (2012) Systemic risk in banking networks without Monte Carlo simulation In: Advances in Network Analysis and Its Applications. Mathematics in Industry, 27–56.. Springer, Berlin Heidelberg.

  • Gleeson, JP (2013) Binary-state dynamics on complex networks: Pair approximation and beyond. Phys Rev X 3(2):021004.

  • Gleeson, JP, Ward JA, O’Sullivan KP, Lee WT (2014) Competition-induced criticality in a model of meme popularity. Phys Rev Lett 112(4):048701.

  • Godsil, C, Royle GF (2013) Algebraic Graph Theory, Vol. 207. Springer, New York.

  • Goldenberg, J, Libai B, Muller E (2001) Talk of the network: A complex systems look at the underlying process of word-of-mouth. Market Lett 12(3):211–223.

  • Gómez-Corral, A, López-García M (2012) Extinction times and size of the surviving species in a two-species competition process. J Math Biol 64(1-2):255–289.

  • Gómez-Corral, A, López-García M (2014) Maximum queue lengths during a fixed time interval in the M/M/c retrial queue. Appl Math Comput 235:124–136.

  • Gómez-Corral, A, López-García M (2015) Lifetime and reproduction of a marked individual in a two-species competition process. Appl Math Comput 264:223–245.

  • Gómez-Corral, A, López-García M (2017) On SIR epidemic models with generally distributed infectious periods: Number of secondary cases and probability of infection. International Journal of Biomathematics 10(02):1750024.

  • Haldane, AG, May RM (2011) Systemic risk in banking ecosystems. Nature 469(7330):351.

  • He, Q-M (2014) Fundamentals of Matrix-analytic Methods, Vol. 365. Springer, New York Heidelberg Dordrecht London.

  • Hegselmann, R, Krause U (2002) Opinion dynamics and bounded confidence models, analysis, and simulation. J Artif Soc Soc Simul 5(3).

  • Hethcote, HW (2000) The mathematics of infectious diseases. SIAM Rev 42(4):599–653.

  • Hill, AL, Rand DG, Nowak MA, Christakis NA (2010) Infectious disease modeling of social contagion in networks. PLOS Comput Biol 6(11):1000968.

  • Holme, P (2017) Three faces of node importance in network epidemiology: Exact results for small graphs. Phys Rev E 96(6):062305.

  • Holme, P, Tupikina L (2018) Epidemic extinction in networks: insights from the 12,110 smallest graphs. New J Phys 20(11):113042.

  • Keeling, MJ, Ross JV (2007) On methods for studying stochastic disease dynamics. J Royal Soc Inter 5(19):171–181.

  • Keeling, MJ, Ross JV (2009) Efficient methods for studying stochastic disease and population dynamics. Theoret Popul Biol 75(2-3):133–141.

  • Kemeny, JG, Snell JL (1960) Finite Markov Chains, Vol. 356. Van Nostrand, Princeton.

  • Kempe, D, Kleinberg J, Tardos É (2003) Maximizing the spread of influence through a social network In: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 137–146.. ACM, New York.

  • KhudaBukhsh, WR, Auddy A, Disser Y, Koeppl H (2019) Approximate lumpability for Markovian agent-based models using local symmetries. J Appl Prob 56(3):647–71.

  • Kirman, A (1993) Ants, rationality, and recruitment. Quart J Econ 108(1):137–56.

  • Kiss, IZ, Röst G, Vizi Z (2015) Generalization of pairwise models to non-Markovian epidemics on networks. Phys Rev Lett 115(7):078701.

  • Kiss, IZ, Miller JC, Simon PL (2017) Mathematics of Epidemics on Networks, Vol. 46. Springer, Cham.

  • Kulkarni, VG (2016) Modeling and Analysis of Stochastic Systems. Chapman and Hall/CRC, New York.

  • Latouche, G, Ramaswami V (1999) Introduction to Matrix Analytic Methods in Stochastic Modeling. SIAM, Philadelphia.

  • Li, C, van de Bovenkamp R, Van Mieghem P (2012) Susceptible-infected-susceptible model: A comparison of N-intertwined and heterogeneous mean-field approximations. Phys Rev E 86(2):026116.

  • López-García, M (2016) Stochastic descriptors in an SIR epidemic model for heterogeneous individuals in small networks. Math Biosci 271:42–61.

  • López-García, M, Kypraios T (2018) A unified stochastic modelling framework for the spread of nosocomial infections. J Royal Soc Interface 15(143):20180060.

  • López-García, M, King M-F, Noakes CJ (2019) A multicompartment SIS stochastic model with zonal ventilation for the spread of nosocomial infections: Detection, outbreak management, and infection control. Risk Anal 39(8):1825–42. https://doi.org/10.1111/risa.13300.

  • Mellor, A, Mobilia M, Redner S, Rucklidge AM, Ward JA (2015) Influence of Luddism on innovation diffusion. Phys Rev E 92(1):012806.

  • Melnik, S, Ward JA, Gleeson JP, Porter MA (2013) Multi-stage complex contagions. Chaos 23(1):013124.

  • Motter, AE, Lai Y-C (2002) Cascade-based attacks on complex networks. Phys Rev E 66(6):065102.

  • Nakao, H, Mikhailov AS (2010) Turing patterns in network-organized activator–inhibitor systems. Nature Phys 6(7):544–50.

  • Newman, ME (2003) The structure and function of complex networks. SIAM Rev 45(2):167–256.

  • Newman, ME (2010) Networks. Oxford University Press, Oxford.

  • Nodelman, U, Shelton CR, Koller D (2002) Continuous time bayesian networks In: Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, 378–387.. Morgan Kaufmann Publishers Inc., San Francisco.

  • Pastor-Satorras, R, Vespignani A (2001) Epidemic spreading in scale-free networks. Phys Rev Lett 86(14):3200.

  • Pastor-Satorras, R, Castellano C, Van Mieghem P, Vespignani A (2015) Epidemic processes in complex networks. Rev Modern Phys 87(3):925.

  • Porter, MA, Gleeson JP (2016) Dynamical systems on networks. Front Appl Dynamic Syst: Rev Tutor 4. https://link.springer.com/book/10.1007%2F978-3-319-26641-1

  • Rodrigues, FA, Peron TKD, Ji P, Kurths J (2016) The kuramoto model in complex networks. Phys Rep 610:1–98.

  • Schelling, TC (1969) Models of segregation. Am Econ Rev 59(2):488–493.

  • Schelling, TC (1971) Dynamic models of segregation. J Math Sociol 1(2):143–186.

  • Simon, PL, Taylor M, Kiss IZ (2011) Exact epidemic models on graphs using graph-automorphism driven lumping. J Math Biol 62(4):479–508.

  • Sood, V, Redner S (2005) Voter model on heterogeneous graphs. Phys Rev Lett 94(17):178701.

  • Stoll, G, Viara E, Barillot E, Calzone L (2012) Continuous time boolean modeling for biological signaling: application of Gillespie algorithm. BMC Syst Biol 6(1):116.

  • Sznajd-Weron, K, Sznajd J (2000) Opinion evolution in closed community. Int J Modern Phys C 11(06):1157–1165.

  • Temime, L, Opatowski L, Pannet Y, Brun-Buisson C, Boëlle PY, Guillemot D (2009) Peripatetic health-care workers as potential superspreaders. Proc Nat Acad Sci 106(43):18420–18425.

  • Valmari, A, Franceschinis G (2010) Simple \(O(m \log n)\) time markov chain lumping In: International Conference on Tools and Algorithms for the Construction and Analysis of Systems, 38–52.. Springer, Berlin.

  • Van Mieghem, P, Omic J, Kooij R (2009) Virus spread in networks. IEEE/ACM Trans Netw (TON) 17(1):1–14.

  • Van Mieghem, P (2011) The N-intertwined SIS epidemic network model. Computing 93(2-4):147–169.

  • Vazquez, F, Krapivsky PL, Redner S (2003) Constrained opinion dynamics: Freezing and slow evolution. J Phys A 36(3):61.

  • Ward, JA, Grindrod P (2014) Aperiodic dynamics in a deterministic adaptive network model of attitude formation in social groups. Phys D: Nonlinear Phenomena 282:27–33.

  • Ward, JA, Evans J (2019) A general model of dynamics on networks with graph automorphism lumping. In: Aiello LM, Cherifi C, Cherifi H, Lambiotte R, Lió P, Rocha LM (eds)Complex Networks and Their Applications VII, 445–456.. Springer, Cham.

  • Watts, DJ (2002) A simple model of global cascades on random networks. Proc Nat Acad Sci 99(9):5766–5771.

Acknowledgements

Martín López-García acknowledges support from the Medical Research Council (UK) through the Skills Development Fellowship MR/N014855/1, and from the Ministry of Science, Innovation and Universities (Government of Spain) through the project PGC2018-097704-B-I00.

Author information

Contributions

Both authors conceived the idea, reviewed the literature, and contributed to the analysis in Sections 2-4, and to writing, reading and revising the manuscript. JAW developed the code that produced Figs. 4, 6 and 7.

Corresponding author

Correspondence to Jonathan A. Ward.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Ward, J.A., López-García, M. Exact analysis of summary statistics for continuous-time discrete-state Markov processes on networks using graph-automorphism lumping. Appl Netw Sci 4, 108 (2019). https://doi.org/10.1007/s41109-019-0206-4
