1 Regenerative Processes

Copyright © 2009 by Karl Sigman


Given a stochastic process $X = \{X(t) : t \ge 0\}$, suppose that there exists a (proper) random time $\tau = \tau_1$ such that $\{X(\tau + t) : t \ge 0\}$ has the same distribution as $X$ and is independent of the past, $C_1 = \{\{X(t) : 0 \le t < \tau\}, \tau\}$. Then we say that $X$ regenerated at time $\tau$, meaning that it has stochastically started over again, as if it were time $t = 0$ again, and its future is independent of its past. In particular, $X(\tau)$ has the same distribution as $X(0)$, and is independent of the regeneration time $\tau$. $C_1$ is called the first cycle and $X_1 = \tau_1$ is the first cycle length. But if such a $\tau_1$ exists, then, since things start over again as if new, there must be a second such time $\tau_2 > \tau_1$ yielding an identically distributed second cycle $C_2 = \{\{X(\tau_1 + t) : 0 \le t < \tau_2 - \tau_1\}, \tau_2 - \tau_1\}$. Continuing in this fashion, we conclude that there is a renewal process of such times $\{\tau_k : k \ge 1\}$ (with $\tau_0 \overset{\mathrm{def}}{=} 0$) with iid cycle lengths $X_k = \tau_k - \tau_{k-1}$, $k \ge 1$, and such that the cycles $C_k = \{\{X(\tau_{k-1} + t) : 0 \le t < X_k\}, X_k\}$ are iid objects.

By definition, $X(\tau_k)$ is independent of the time $\tau_k$: upon regeneration, the value of the process is independent of what time it is. (This turns out to be of fundamental importance when we later prove weak convergence results for regenerative processes.)

The above defines what is called a regenerative process. It is said to be positive recurrent if the renewal process is so, that is, if $0 < E(X_1) < \infty$; it is said to be null recurrent if $E(X_1) = \infty$.

We always assume that the paths of $X$ are right-continuous and have left-hand limits. (Think of the forward recurrence time paths $\{A(t) : t \ge 0\}$ of a simple point process.)
1.1 Examples
1. Any recurrent continuous-time Markov chain is regenerative: Fix any state $i$ and let $X(0) = i$. Then let $\tau = \tau_{i,i}$ denote the first time that the chain returns back to state $i$ (after first leaving it). By the (strong) Markov property, the chain indeed starts all over again in state $i$ at time $\tau$, independent of its past. Since $i$ is recurrent, we know that the chain must in fact return over and over again, and we can let $\tau_k$ denote the $k$th such time. We also know that positive recurrence of the chain, by definition, means that $E(\tau_{i,i}) < \infty$, which is thus equivalent to $X$ being a positive recurrent regenerative process.

2. Age, excess and spread for a renewal process: Given a renewal process, the processes $\{B(t) : t \ge 0\}$, $\{A(t) : t \ge 0\}$ and $\{S(t) : t \ge 0\}$ are all regenerative; they regenerate at the arrival times $\{t_n : n \ge 1\}$, that is, $\tau_n = t_n$, $n \ge 1$. They are positive recurrent when the renewal process is so. (A small simulation sketch follows.)
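To make Example 2 concrete, here is a minimal simulation sketch (not from the notes; the Exp(1) interarrival times are an illustrative assumption) of the forward recurrence time process $A(t)$: its path decreases linearly to 0 and then restarts afresh at each arrival time $t_n$, which is exactly the regeneration structure described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_process(num_arrivals=10, grid=0.01):
    """Simulate A(t), the time from t until the next renewal, for a
    renewal process with Exp(1) interarrival times (illustrative choice)."""
    gaps = rng.exponential(1.0, size=num_arrivals)  # iid cycle lengths X_k
    arrivals = np.cumsum(gaps)                      # regeneration times tau_k
    ts = np.arange(0.0, arrivals[-1], grid)
    idx = np.searchsorted(arrivals, ts, side="right")  # next arrival after t
    return ts, arrivals[idx] - ts, arrivals

ts, A, taus = excess_process()
# At each regeneration time tau_k the path of A jumps to a fresh Exp(1)
# interarrival time, independent of everything before tau_k.
print("regeneration times:", np.round(taus[:5], 3))
```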
2 Renewal reward theorem applied to regenerative processes
Because $\{\tau_k\}$ is a renewal process and the cycles $\{C_k\}$ are iid, we can apply the renewal reward theorem in a variety of ways to a regenerative process so as to compute various time average quantities of interest. The general result can be stated in words as

    the time average is equal to the expected value over a cycle divided by the expected cycle length.

For example, we can obtain the time average of the process itself as
$$\lim_{t\to\infty}\frac{1}{t}\int_0^t X(s)\,ds = \frac{E(R)}{E(X)}, \ \text{wp1},$$
where $X = X_1$ and $R = R_1 = \int_0^{X_1} X(s)\,ds$. The proof is derived by letting $N(t)$ denote the number of renewals (regenerations) by time $t$, and observing that
$$\int_0^t X(s)\,ds \approx \sum_{j=1}^{N(t)} R_j,$$
where
$$R_j = \int_{\tau_{j-1}}^{\tau_j} X(s)\,ds.$$
So we are simply interpreting $X(t)$ as the rate at which we earn a reward at time $t$ and applying the renewal reward theorem. The $(R_j, X_j)$ are iid since they are constructed from iid cycles; $(R_j, X_j)$ is constructed from $C_j$. (A quick numerical sanity check follows.)
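As a quick sanity check (my own sketch, not part of the notes), take $X(t) = B(t)$, the age process of a renewal process with Exp(1) interarrival times. Over a cycle of length $X$ the age climbs linearly from 0 to $X$, so $R = \int_0^X s\,ds = X^2/2$ and $E(R)/E(X) = E(X^2)/(2E(X)) = 1$; both the cycle-based ratio and the time average evaluated at regeneration epochs should be close to 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# iid cycle lengths X_k ~ Exp(1); over each cycle the age process B(t)
# climbs linearly from 0 to X_k, so the cycle reward is
# R_k = integral_0^{X_k} s ds = X_k**2 / 2.
X = rng.exponential(1.0, size=200_000)
R = X**2 / 2

print(R.mean() / X.mean())  # E(R)/E(X), the renewal reward answer
print(R.sum() / X.sum())    # time average (1/t) int_0^t B(s) ds at t = tau_N
# Both estimates approach E(X^2) / (2 E(X)) = 1 for Exp(1) cycles.
```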
2.1 Functions of regenerative processes are regenerative with the same regeneration times

If $X$ is regenerative with regeneration times $\tau_k$, then it is immediate that for any (measurable) function $f = f(x)$, the process $Y(t) = f(X(t))$ defines a regenerative process with the very same regeneration times. This allows us to obtain
$$\lim_{t\to\infty}\frac{1}{t}\int_0^t f(X(s))\,ds = \frac{E(R)}{E(X)}, \ \text{wp1},$$
where $X = X_1$ and $R = R_1 = \int_0^{X_1} f(X(s))\,ds$.

We next precisely state and prove the main result. The proof mimics quite closely the proof of the renewal reward theorem.

Theorem 2.1 If $X$ is a positive recurrent regenerative process, and $f = f(x)$ is a (measurable) function such that $E\left(\int_0^{X_1} |f(X(s))|\,ds\right) < \infty$, then
$$\lim_{t\to\infty}\frac{1}{t}\int_0^t f(X(s))\,ds = \frac{E(R)}{E(X)}, \ \text{wp1}, \qquad (1)$$
and
$$\lim_{t\to\infty}\frac{1}{t}\int_0^t E[f(X(s))]\,ds = \frac{E(R)}{E(X)}, \qquad (2)$$
where $R = R_1 = \int_0^{X_1} f(X(s))\,ds$, and $X = X_1$.
Proof: Let $N(t) = \max\{j : \tau_j \le t\}$ denote the number of renewals (regenerations) by time $t$. First note that if $f \ge 0$, then since $\tau_{N(t)} \le t < \tau_{N(t)+1}$, by defining $R_j = \int_{\tau_{j-1}}^{\tau_j} f(X(s))\,ds$, (1) can be derived by using the following sandwiching bounds (upper and lower bounds),
$$\frac{1}{t}\sum_{j=1}^{N(t)} R_j \;\le\; \frac{1}{t}\int_0^t f(X(s))\,ds \;\le\; \frac{1}{t}\sum_{j=1}^{N(t)+1} R_j. \qquad (3)$$
Both these bounds converge to $E(R)/E(X)$, yielding the result: For the lower bound,
$$\frac{1}{t}\sum_{j=1}^{N(t)} R_j = \frac{N(t)}{t}\cdot\frac{1}{N(t)}\sum_{j=1}^{N(t)} R_j;$$
$N(t)/t \to 1/E(X)$ by the elementary renewal theorem, while $\frac{1}{N(t)}\sum_{j=1}^{N(t)} R_j \to E(R)$ by the strong law of large numbers, since the $R_j$ are iid and have finite first moment by the theorem's hypothesis.

Similarly, for the upper bound, noting that $(N(t)+1)/t \to 1/E(X)$ (since $1/t \to 0$), and rewriting the upper bound as
$$\frac{N(t)+1}{t}\cdot\frac{1}{N(t)+1}\sum_{j=1}^{N(t)+1} R_j,$$
we obtain the same limit for this upper bound, thus completing the proof of (1) in this non-negative case. Note in passing from (3) that, because the upper bound converges to the same (finite) limit as the lower bound, it must hold that
$$\frac{R_{N(t)+1}}{t} \to 0, \ \text{wp1}, \qquad (4)$$
because $R_{N(t)+1}/t$ is precisely the difference between the upper and lower bounds.
If $f$ is not non-negative, then first observe that since $|f|$ is non-negative, we can apply the above proof to $\frac{1}{t}\int_0^t |f(X(s))|\,ds$ to conclude from (4) that $R^*_{N(t)+1}/t \to 0$, wp1, where $R^*_j = \int_{\tau_{j-1}}^{\tau_j} |f(X(s))|\,ds$, $j \ge 1$. The proof of (1) is then completed by writing
$$\frac{1}{t}\int_0^t f(X(s))\,ds = \frac{1}{t}\sum_{j=1}^{N(t)} R_j + \frac{1}{t}\int_{\tau_{N(t)}}^t f(X(s))\,ds,$$
noting that the first piece on the rhs converges to the desired answer, and the second piece (the error) tends to 0 by (4):
$$\left|\frac{1}{t}\int_{\tau_{N(t)}}^t f(X(s))\,ds\right| \le \frac{1}{t}\int_{\tau_{N(t)}}^t |f(X(s))|\,ds \le \frac{1}{t}\int_{\tau_{N(t)}}^{\tau_{N(t)+1}} |f(X(s))|\,ds = \frac{R^*_{N(t)+1}}{t} \to 0.$$
($E(|R_j|) < \infty$ since $|R_j| \le R^*_j$ and $E(R^*_j) < \infty$ by the theorem's hypothesis.)
To prove (2) we must show that $\{R(t)/t : t \ge 1\}$ is uniformly integrable (UI), where $R(t) \overset{\mathrm{def}}{=} \int_0^t f(X(s))\,ds$. Since $|R(t)| \le \int_0^t |f(X(s))|\,ds$, it follows that
$$|R(t)|/t \le Y(t) = \frac{1}{t}\sum_{j=1}^{N(t)+1} R^*_j. \qquad (5)$$
We know that $Y(t) \to E(R^*)/E(X) < \infty$, wp1. Moreover, $N(t)+1$ is a stopping time with respect to $\{(X_j, R^*_j)\}$, so from Wald's equation $E(Y(t)) = \big(E(N(t))/t + 1/t\big)E(R^*) \to E(R^*)/E(X)$ (via the elementary renewal theorem); thus $\{Y(t) : t \ge 1\}$ is uniformly integrable (UI) (a collection $\{X(t)\}$ of non-negative rvs for which $X(t) \to X$ wp1 with $E(X) < \infty$ is UI if and only if $E(X(t)) \to E(X)$), and since $|R(t)|/t \le Y(t)$, $\{R(t)/t : t \ge 1\}$ is UI as well.
As an immediate application of Theorem 2.1, take $f(x) = I\{x \le b\}$ with $b$ constant. Such a function is non-negative and bounded, so the hypothesis of Theorem 2.1 is immediately verified.

Corollary 2.1 A positive recurrent regenerative process has a limiting distribution: Letting $X^*$ denote a rv with this distribution, we have
$$P(X^* \le b) = \lim_{t\to\infty}\frac{1}{t}\int_0^t I\{X(s) \le b\}\,ds = \frac{E(R)}{E(X)}, \ \text{wp1}, \quad b \in \mathbb{R},$$
where $R = \int_0^{X_1} I\{X(s) \le b\}\,ds$, and $X = X_1$. Also,
$$P(X^* \le b) = \lim_{t\to\infty}\frac{1}{t}\int_0^t P(X(s) \le b)\,ds = \frac{E(R)}{E(X)}, \quad b \in \mathbb{R}.$$
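For instance (a sketch under the same illustrative Exp(1) age-process assumption used earlier), over one cycle the age process spends $\min(X, b)$ time units at or below level $b$, so $R = \int_0^{X} I\{s \le b\}\,ds = \min(X, b)$, and the cycle estimator $E(R)/E(X)$ should approach the equilibrium distribution $F_e(b)$, which for Exp(1) cycles is again $1 - e^{-b}$:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.exponential(1.0, size=200_000)  # iid cycle lengths (Exp(1) assumed)

b = 1.5
R = np.minimum(X, b)        # time spent at or below b during one age cycle
print(R.mean() / X.mean())  # estimate of P(X* <= b) via Corollary 2.1
print(1 - np.exp(-b))       # exact equilibrium value for Exp(1) cycles
```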
2.2 Weak convergence to the limiting distribution
Theorem 2.2 A positive recurrent regenerative process with a non-lattice cycle length distribution, $F(x) = P(X_1 \le x)$, converges weakly to its limiting distribution: as $t \to \infty$,
$$X(t) \Rightarrow X^*,$$
where $X^*$ has the limiting distribution given in Corollary 2.1.

Proof: The proof is a straightforward application of the key renewal theorem, where we condition on the first regeneration time, $\tau_1$, to obtain a renewal equation. Letting $f$ be any bounded continuous function such that $\max_x |f(x)| \le 1$, we must show that $E(f(X(t))) \to E(f(X^*))$. Conditioning on $\tau_1 > t$ and $\tau_1 = s \le t$ yields (in the usual manner)
$$E(f(X(t))) = E(f(X(t)); \tau_1 > t) + \int_0^t E(f(X(t-s)))\,dF(s).$$
The function $Q(t) = E(f(X(t)); \tau_1 > t)$ can be shown to be directly Riemann integrable (DRI) because it is bounded by $P(\tau_1 > t)$ (itself already DRI) and is right-continuous (by the continuity and boundedness of $f$ and the always assumed right-continuity of the paths of $X$). Thus
$$E(f(X(t))) \to \{E(\tau_1)\}^{-1}\int_0^\infty E(f(X(t)); \tau_1 > t)\,dt.$$
The limit can be rewritten (via Fubini's theorem) as
$$\{E(\tau_1)\}^{-1}\, E\!\left(\int_0^{\tau_1} f(X(t))\,dt\right);$$
the same limit as in Theorem 2.1.
2.3 Delayed regenerative processes
Given a regenerative process, $X = \{X(t) : t \ge 0\}$, it is natural to want to begin observing it starting at some future time $t = s$ (say) (random or deterministic) as opposed to at time $t = 0$; we thus would consider the process $\{X(s+t) : t \ge 0\}$. But if we do that, then the first cycle will be a remaining cycle, and thus not be the same (in distribution) as a typical future iid one. More generally, we can take any regenerative process and simply replace the first cycle by something different in distribution from (but independent of) the others, and yet it will keep regenerating once the first cycle ends.

A regenerative process in which the first cycle has been replaced by one with a different distribution from (but still independent of) the iid cycles following it is called a delayed regenerative process, with delayed initial cycle $C_0$ and delayed initial cycle length $\tau_0$. Thus there is an initial random time $0 \le \tau_0 < \tau_1$, and the initial cycle is given by $C_0 = \{\{X(t) : 0 \le t < \tau_0\}, \tau_0\}$. $X_1 = \tau_1 - \tau_0$, and the next cycle is $C_1 = \{\{X(\tau_0 + t) : 0 \le t < X_1\}, X_1\}$, and so on. $C_0$ is independent of the other iid cycles $\{C_k : k \ge 1\}$, but is allowed to have a different distribution. The renewal process $\{\tau_n : n \ge 0\}$ is a delayed renewal process with initial delay $\tau_0$. (We assume that $P(\tau_0 > 0) > 0$.)

When there is no delay cycle, a regenerative process is called non-delayed. Note that once the delay is finished, a delayed regenerative process becomes non-delayed; thus $\{X(\tau_0 + t) : t \ge 0\}$ is a regular non-delayed regenerative process, called a non-delayed version. A delayed regenerative process is called positive recurrent if its non-delayed version is so, that is, if $0 < E(X_1) < \infty$. It is not required that $\tau_0$ have finite first moment, but we of course assume that it is a proper rv, that is, that $P(\tau_0 < \infty) = 1$, so that $C_0$ will end wp1.

As a common example, given a positive recurrent CTMC and using visits to state 1 as regeneration times, what if it started in state 2 instead? Clearly, when $X(0) = 2$, once the chain enters state 1 (which it will by recurrence), we are back to the non-delayed case; the delay is short-lived. In this case $\tau_0 = \tau_{2,1} = \inf\{t > 0 : X(t) = 1 \mid X(0) = 2\}$, the hitting time to state 1 given $X(0) = 2$, and $C_0$ captures the evolution of the chain until it hits state 1. (A small simulation sketch of this delay follows.)
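A minimal sketch (my own, with made-up rates) of this delay: a two-state CTMC with transition rates $q_{1\to 2} = 1$ and $q_{2\to 1} = 2$, started in state 2. The first visit to state 1 ends the delay cycle $C_0$ of length $\tau_0 = \tau_{2,1}$; every later return to state 1 is a regeneration, and those cycle lengths are iid.

```python
import numpy as np

rng = np.random.default_rng(3)
rates = {1: 1.0, 2: 2.0}  # hypothetical rates: leave state 1 at rate 1, state 2 at rate 2

def delay_and_cycles(start=2, n_regens=5):
    """Run the two-state chain until the (n_regens + 1)-th visit to state 1;
    return the delay length tau_0 and the subsequent iid cycle lengths."""
    state, t, visits = start, 0.0, []
    while len(visits) < n_regens + 1:
        t += rng.exponential(1.0 / rates[state])  # exponential holding time
        state = 2 if state == 1 else 1            # only two states: flip
        if state == 1:
            visits.append(t)
    tau0 = visits[0]     # delay cycle length tau_{2,1}: Exp(2) here, not iid with the rest
    X = np.diff(visits)  # iid cycle lengths X_k = tau_k - tau_{k-1}
    return tau0, X

tau0, X = delay_and_cycles()
print("delay tau_0:", round(tau0, 3), "iid cycle lengths:", np.round(X, 3))
```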
Since a delay cycle $C_0$ eventually ends, limiting results such as Theorem 2.1 should and do remain valid, with some minor regularity conditions placed on $C_0$ ensuring that its effect is asymptotically negligible as $t \to \infty$. We deal with this next. The reward and cycle used in such results must now be the first iid one, $R = R_1 = \int_{\tau_0}^{\tau_0 + X_1} f(X(s))\,ds$ of length $X_1$, not of course the delay one, $R_0 = \int_0^{\tau_0} f(X(s))\,ds$ of length $\tau_0$.

Corollary 2.1 remains valid without any further conditions:
Proposition 2.1 A delayed positive recurrent regenerative process (with no conditions placed on the delay cycle $C_0$) has the same limiting distribution as its non-delayed counterpart: Letting $X^*$ denote a rv with this distribution, we have
$$P(X^* \le b) = \lim_{t\to\infty}\frac{1}{t}\int_0^t I\{X(s) \le b\}\,ds = \frac{E(R)}{E(X)}, \ \text{wp1}, \quad b \in \mathbb{R},$$
where $R = R_1 = \int_{\tau_0}^{\tau_0 + X_1} I\{X(s) \le b\}\,ds$, and $X = X_1$. Also,
$$P(X^* \le b) = \lim_{t\to\infty}\frac{1}{t}\int_0^t P(X(s) \le b)\,ds = \frac{E(R)}{E(X)}, \quad b \in \mathbb{R}.$$
Proof: For $t > \tau_0$ we have, wp1,
$$\frac{1}{t}\int_0^t I\{X(s) \le b\}\,ds = \frac{R_0}{t} + \frac{1}{t}\int_{\tau_0}^t I\{X(s) \le b\}\,ds,$$
where $R_0 = \int_0^{\tau_0} I\{X(s) \le b\}\,ds \le \tau_0$; thus $R_0/t \to 0$, wp1. The second piece is a non-delayed version, and can be rewritten as
$$\frac{t-\tau_0}{t}\cdot\frac{1}{t-\tau_0}\int_{\tau_0}^t I\{X(s) \le b\}\,ds,$$
hence converges wp1, as $t \to \infty$, to the desired $E(R)/E(X)$ by the first part of Corollary 2.1. Taking the expected value of $\frac{1}{t}\int_0^t I\{X(s) \le b\}\,ds$ and applying the bounded convergence theorem then yields the second result.
In general, some further conditions on $C_0$ are needed to obtain Theorem 2.1: The only further condition needed for (1) is
$$\int_0^{\tau_0} |f(X(s))|\,ds < \infty, \ \text{wp1}, \qquad (6)$$
and for (2) a sufficient condition is
$$E\left(\int_0^{\tau_0} |f(X(s))|\,ds\right) < \infty. \qquad (7)$$

Theorem 2.3 If $X$ is a delayed positive recurrent regenerative process, and $f = f(x)$ is a (measurable) function such that $E\left(\int_{\tau_0}^{\tau_0 + X_1} |f(X(s))|\,ds\right) < \infty$, and also (6) holds, then
$$\lim_{t\to\infty}\frac{1}{t}\int_0^t f(X(s))\,ds = \frac{E(R)}{E(X)}, \ \text{wp1}, \qquad (8)$$
and if additionally (7) holds, then also
$$\lim_{t\to\infty}\frac{1}{t}\int_0^t E[f(X(s))]\,ds = \frac{E(R)}{E(X)}, \qquad (9)$$
where $R = R_1 = \int_{\tau_0}^{\tau_0 + X_1} f(X(s))\,ds$, and $X = X_1$.
Proof: $R_0 = \int_0^{\tau_0} f(X(s))\,ds$, and the proof of (8) is identical to the proof in Proposition 2.1, with condition (6) ensuring that $P(|R_0| < \infty) = 1$, so that $R_0/t \to 0$.

For (9) we argue UI as we did in proving (2), modifying the upper bound in (5) to a stochastic upper bound
$$|R(t)|/t \le K(t) = \frac{R^*_0}{t} + Y(t),$$
where $R^*_0 = \int_0^{\tau_0} |f(X(s))|\,ds$ is independent of $\{Y(t)\}$, and $Y(t)$ is constructed from a non-delayed version, and hence (as we proved) $E(Y(t)) \to E(R^*)/E(X)$. Since $E(R^*_0) < \infty$ by (7), it follows that $E(R^*_0)/t \to 0$ and hence $\{K(t)\}$ is UI; thus so is $\{R(t)/t\}$, as was to be shown.

That (7) is not necessary for (9) can be seen from Proposition 2.1. That is because whenever $f$ is non-negative and bounded (such as $f(x) = I\{x \le b\}$), (6) is automatically satisfied, but (7) might not be satisfied even though (9) still holds (by applying the bounded convergence theorem to (8)).
Finally, Theorem 2.2 remains valid (the proof is the same; we leave it out):

Theorem 2.4 With no extra conditions placed on the delay distribution $P(\tau_0 \le x)$, a positive recurrent delayed regenerative process with a non-lattice cycle length distribution, $F(x) = P(X_1 \le x)$, weakly converges to its limiting distribution: as $t \to \infty$,
$$X(t) \Rightarrow X^*,$$
where $X^*$ has the limiting distribution given in Corollary 2.1.
2.4 Stationary versions of a regenerative process

Given a positive recurrent regenerative process $X = \{X(t) : t \ge 0\}$, we know that the one-dimensional $X(t)$ always has a limiting distribution from Corollary 2.1. But that is only a marginal distribution. It turns out that by creating a particular initial delay cycle $C^*_0$, with its corresponding initial delay $\tau^*_0$, we can make the regenerative process a stationary process, called a stationary version of the process, denoted by $X^* = \{X^*(t) : t \ge 0\}$. For every fixed $s \ge 0$ it holds that $X^*(s)$ has the distribution given in Corollary 2.1, but much more is true: For every fixed $s \ge 0$, the entire shifted process $\{X^*(s+t) : t \ge 0\}$ has the same distribution, the same as $\{X^*(t) : t \ge 0\}$. This means that all joint distributions are the same. For example, $(X^*(s+t_1), X^*(s+t_2), \ldots, X^*(s+t_k))$ has the same distribution as $(X^*(t_1), X^*(t_2), \ldots, X^*(t_k))$ for any $s \ge 0$. In particular, $X^*(s)$ has the same distribution as $X^*(0)$ for all $s \ge 0$, the distribution in Corollary 2.1. As an easy example, consider a positive recurrent CTMC, with stationary probabilities $\{P_n\}$ (the solution to the balance equations). Then, as we know, by starting off $X(0)$ randomly distributed as $\{P_n\}$, $P(X(0) = n) = P_n$, the chain is stationary. Taking this stationary version, we can still use visits to a fixed state $i$ as regeneration points; we choose $\tau^*_0 = \inf\{t \ge 0 : X(t) = i\}$ and $C^*_0$ is simply the evolution of the chain during $[0, \tau^*_0)$.
In general, though, how do we find $C^*_0$? First we must recall that the underlying renewal process has a stationary version obtained by using a delay $\tau_0$ distributed as $F_e$, and we obtained $F_e$ (as a limiting distribution) by randomly observing the renewal process at a time point way out in the infinite future. We now do the same thing for the entire regenerative process: We observe $X_s = \{X(s+t) : t \ge 0\}$, in which the origin (time $s$) is randomly chosen way out in the infinite future. (We choose $s$ uniformly over $(0, t)$ and then let $t \to \infty$.) For each $s$, let $C_0(s)$ denote the delay cycle associated with $X_s$. It is precisely the remaining cycle of the one covering time $s$. (It is a generalization of the forward recurrence time.) It has delay length $\tau_0(s) = A(s)$, the forward recurrence time. For each $s \ge 0$, the only part of $X_s$ that can change in distribution is this delay cycle $C_0(s)$, because all the others are iid regular ones. Thus we get a stationary version of the regenerative process if and only if the distribution of $C_0(s)$ is the same for all $s \ge 0$. (This is the same basic argument we used to produce a stationary version of a renewal process.) We already know from our Lecture Notes on renewal theory that $\tau_0(s) = A(s)$ has $F_e$ as its limiting distribution. It turns out that the entire cycle $C_0(s)$ has a limiting distribution. Letting $C^*_0 = \{\{X^*(t) : 0 \le t < \tau^*_0\}, \tau^*_0\}$ denote such a limiting delay cycle with this limiting distribution, the length $\tau^*_0$ must of course be distributed as $F_e$. It turns out that the entire distribution of $C^*_0$, denoted by $P(C^*_0 \in B)$ (for appropriate path sets $B$), is given by
$$P(C^*_0 \in B) = \lim_{t\to\infty}\frac{1}{t}\int_0^t P(C_0(s) \in B)\,ds = \frac{E(R)}{E(X)}, \qquad (10)$$
where $R = \int_0^{X_1} I\{C_0(s) \in B\}\,ds$, and $X = X_1$. This is because $\{C_0(s) : s \ge 0\}$ forms a positive recurrent regenerative process, with the same regeneration times as $X$ but with a more general state space, and the theory remains valid for such general state spaces; thus we are merely applying a more general form of Corollary 2.1. At a regeneration time $\tau_k$, $k \ge 1$, the distribution of $C_0(\tau_k)$ is that of a regular iid cycle $C_k$.
The same basic renewal reward proof works for obtaining (10):
$$\frac{1}{t}\int_0^t I\{C_0(s) \in B\}\,ds \approx \frac{1}{t}\sum_{j=1}^{N(t)} R_j,$$
where $R_j = \int_{\tau_{j-1}}^{\tau_j} I\{C_0(s) \in B\}\,ds$.

That $C^*_0(u)$ (the delay cycle at time $u$ from $X^*$) has the same distribution for all $u \ge 0$ follows by the same argument used to show that $A^*(u)$ has the same distribution for all $u \ge 0$: Once out at $\infty$, going $u$ time units further does not change the distribution;
$$P(C^*_0(u) \in B) = \lim_{t\to\infty}\frac{1}{t}\int_0^t P(C_0(s+u) \in B)\,ds = \lim_{t\to\infty}\frac{1}{t}\int_0^t P(C_0(s) \in B)\,ds. \qquad (11)$$
(A small simulation sketch of this time-invariance, for the forward recurrence time, follows.)
2.5 Time averages as expected values under the stationary distribution
A general result is that time averages are equal to the expected value under the stationary distribution. Letting $X^* = \{X^*(t) : t \ge 0\}$ denote a stationary version, so that $X^*(0)$ has the stationary distribution in Corollary 2.1, we next state

Theorem 2.5 If $X$ is a positive recurrent regenerative process, and $f = f(x)$ is a (measurable) function such that $E\left(\int_0^{X_1} |f(X(s))|\,ds\right) < \infty$, then
$$\lim_{t\to\infty}\frac{1}{t}\int_0^t f(X(s))\,ds = E(f(X^*(0))), \ \text{wp1}. \qquad (12)$$

Let us see some examples: In particular, using $f(x) = x$ yields
$$\lim_{t\to\infty}\frac{1}{t}\int_0^t X(s)\,ds = E(X^*(0)), \ \text{wp1}. \qquad (13)$$
As an example of this, consider a non-negative positive recurrent CTMC, and suppose we solved the balance equations and found the stationary probabilities $\{P_n\}$. Then (13) says
$$\lim_{t\to\infty}\frac{1}{t}\int_0^t X(s)\,ds = E(X^*(0)) = \sum_n n P_n. \qquad (14)$$
To prove (14) directly, recall that, wp1,
$$P_n = \lim_{t\to\infty}\frac{1}{t}\int_0^t I\{X(s) = n\}\,ds.$$
Now let $I_n(t) = \int_0^t I\{X(s) = n\}\,ds$; thus $I_n(t)/t \to P_n$. Then
$$\frac{1}{t}\int_0^t X(s)\,ds = \frac{1}{t}\sum_{n=0}^{\infty} n I_n(t) = \sum_n n\,(I_n(t)/t) \to \sum_n n P_n.$$
The interchange of limit and infinite sum can be justified. (A small numerical check follows.)
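A small numerical check (my own sketch, with an arbitrary birth-death generator, not from the notes): solve the balance equations for $\{P_n\}$ and compare $\sum_n n P_n$ with a simulated time average of $X(s)$.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical birth-death CTMC on {0,...,3}: birth rate 1, death rate 2.
N, lam, mu = 3, 1.0, 2.0
Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        Q[n, n + 1] = lam
    if n > 0:
        Q[n, n - 1] = mu
    Q[n, n] = -Q[n].sum()

# Stationary probabilities: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(N + 1)])
rhs = np.append(np.zeros(N + 1), 1.0)
P = np.linalg.lstsq(A, rhs, rcond=None)[0]
print("sum_n n P_n =", (np.arange(N + 1) * P).sum())

# Simulated time average (1/t) * integral_0^t X(s) ds.
state, t, area = 0, 0.0, 0.0
while t < 100_000:
    hold = rng.exponential(-1.0 / Q[state, state])  # holding time in state
    area += state * hold
    t += hold
    p = Q[state].clip(min=0)                        # jump probabilities
    state = rng.choice(N + 1, p=p / p.sum())
print("time average  =", area / t)  # the two numbers should agree
```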
For the more general (13), once again assume non-negativity. Then $E(X^*(0)) = \int_0^\infty P(X^*(0) > b)\,db$, and we also know that $P(X^*(0) > b) = \lim_{t\to\infty}\frac{1}{t}\int_0^t I\{X(s) > b\}\,ds$; we thus obtain (changing the order of integration and limits, allowed by the bounded convergence theorem and Tonelli's theorem):
$$E(X^*(0)) = \int_0^\infty P(X^*(0) > b)\,db = \int_0^\infty \lim_{t\to\infty}\frac{1}{t}\int_0^t I\{X(s) > b\}\,ds\,db$$
$$= \lim_{t\to\infty}\frac{1}{t}\int_0^t\!\!\int_0^\infty I\{X(s) > b\}\,db\,ds = \lim_{t\to\infty}\frac{1}{t}\int_0^t\!\!\int_0^{X(s)} db\,ds = \lim_{t\to\infty}\frac{1}{t}\int_0^t X(s)\,ds.$$
More generally: From Theorem 2.3, we know that the two versions have the same limit;
$$\lim_{t\to\infty}\frac{1}{t}\int_0^t f(X(s))\,ds = \lim_{t\to\infty}\frac{1}{t}\int_0^t f(X^*(s))\,ds, \ \text{wp1}, \qquad (15)$$
and (assuming some conditions) we can obtain the same limit by taking expected values too. But by stationarity $E(f(X^*(s))) = E(f(X^*(0)))$, $s \ge 0$, a constant; thus
$$\lim_{t\to\infty}\frac{1}{t}\int_0^t E(f(X^*(s)))\,ds = \lim_{t\to\infty}\frac{1}{t}\int_0^t E(f(X^*(0)))\,ds = E(f(X^*(0)));$$
we have derived (12).
Discrete-time regenerative processes
For a stochastic sequence $\{Y_n : n \ge 0\}$, the definition of regenerative is the same. In this case, $C_1 = \{\{Y_n : 0 \le n \le \tau_1 - 1\}, \tau_1\}$, and the renewal process is a discrete-time renewal process; $N(n)/n \to 1/E(X)$ as $n \to \infty$. Because of the discrete setting, the $k$th cycle begins at time $\tau_{k-1}$ and ends at time $\tau_k - 1$, and there are $X_k$ values of the $Y_n$ in the cycle. The analog of Theorem 2.1 is

Theorem 2.6 If $Y$ is a positive recurrent regenerative sequence, and $f = f(x)$ is a (measurable) function such that $E\left(\sum_{j=0}^{X_1 - 1} |f(Y_j)|\right) < \infty$, then
$$\lim_{n\to\infty}\frac{1}{n}\sum_{j=1}^{n} f(Y_j) = \frac{E(R)}{E(X)}, \ \text{wp1}, \qquad (16)$$
and
$$\lim_{n\to\infty}\frac{1}{n}\sum_{j=1}^{n} E(f(Y_j)) = \frac{E(R)}{E(X)}, \qquad (17)$$
where $R = R_1 = \sum_{j=0}^{X_1 - 1} f(Y_j)$, and $X = X_1$. (A small numerical illustration follows.)
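To illustrate Theorem 2.6 (a sketch with a made-up two-state Markov chain on $\{0,1\}$, $P(0\to1) = 0.3$, $P(1\to0) = 0.5$, regenerating at visits to state 0 and taking $f(y) = y$): the time average $\frac{1}{n}\sum_j Y_j$ and the cycle estimator $E(R)/E(X)$ should both approach the stationary probability of state 1, here $0.3/0.8 = 0.375$.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical two-state chain on {0, 1}: P(0->1) = 0.3, P(1->0) = 0.5.
p01, p10 = 0.3, 0.5
n = 500_000
Y = np.zeros(n, dtype=int)
for j in range(1, n):
    up = p01 if Y[j - 1] == 0 else 1.0 - p10   # probability of being 1 next
    Y[j] = rng.random() < up

# Regenerations at visits to state 0; cycle k covers times tau_{k-1}..tau_k - 1.
taus = np.flatnonzero(Y == 0)                  # includes tau_0 = 0 since Y[0] = 0
X = np.diff(taus)                              # iid cycle lengths
R = np.add.reduceat(Y[:taus[-1]], taus[:-1])   # cycle rewards sum_j f(Y_j)

print(Y.mean())           # time average (1/n) sum_j f(Y_j)
print(R.sum() / X.sum())  # E(R)/E(X); both approach 0.3/0.8 = 0.375
```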