1 Regenerative Processes: 1.1 Examples
\[
\sum_{j=1}^{N(t)} R_j, \qquad \text{where } R_j = \int_{\tau_{j-1}}^{\tau_j} X(s)\,ds.
\]
So we are simply interpreting $X(t)$ as the rate at which we earn a reward at time $t$ and applying the renewal reward theorem. The $(R_j, X_j)$ are iid since they are constructed from iid cycles; $(R_j, X_j)$ is constructed from $C_j$.
2.1 Functions of regenerative processes are regenerative with the same regeneration times
If $X$ is regenerative with regeneration times $\tau_k$, then it is immediate that for any (measurable) function $f = f(x)$, the process $Y(t) = f(X(t))$ defines a regenerative process with the very same regeneration times. This allows us to obtain
\[
\lim_{t\to\infty} \frac{1}{t}\int_0^t f(X(s))\,ds = \frac{E(R)}{E(X)}, \ \text{wp1},
\]
where $X = X_1$ and $R = R_1 = \int_0^{X_1} f(X(s))\,ds$.
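This time average is easy to check numerically. Below is a minimal sketch, not from the notes: an alternating on/off process (on-durations $\sim$ Exp(rate_on), off-durations $\sim$ Exp(rate_off), regenerating each time an on-period starts), with my own illustrative choice of rates, function name, and parameters. With $f(x) = x$, the cycle reward is the on-duration, so the predicted limit is $E(\text{on})/(E(\text{on})+E(\text{off}))$.

```python
import random

# Hypothetical illustration (not from the notes): an alternating on/off
# regenerative process.  X(t) = 1 during an "on" period ~ Exp(rate_on),
# X(t) = 0 during an "off" period ~ Exp(rate_off).  Regenerations occur at
# the start of each on-period, so a cycle has length X_1 = on + off.
# With f(x) = x, the cycle reward is R_1 = on-duration, and the time
# average should converge to E(on) / (E(on) + E(off)).

def time_average(rate_on, rate_off, n_cycles, rng):
    total_reward = 0.0   # integral of f(X(s)) = total time spent "on"
    total_time = 0.0     # sum of the cycle lengths
    for _ in range(n_cycles):
        on = rng.expovariate(rate_on)
        off = rng.expovariate(rate_off)
        total_reward += on
        total_time += on + off
    return total_reward / total_time

rng = random.Random(42)
est = time_average(rate_on=1.0, rate_off=0.5, n_cycles=200_000, rng=rng)
theory = (1 / 1.0) / (1 / 1.0 + 1 / 0.5)   # E(on)/(E(on)+E(off)) = 1/3
print(est, theory)
```

With these (arbitrary) rates the limit is $1/3$, and the estimate should agree to a few decimal places for this many cycles.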
We next precisely state and prove the main result. The proof mimics quite closely the proof
of the renewal reward theorem.
Theorem 2.1 If $X$ is a positive recurrent regenerative process, and $f = f(x)$ is a (measurable) function such that $E\bigl(\int_0^{X_1} |f(X(s))|\,ds\bigr) < \infty$, then
\[
\lim_{t\to\infty} \frac{1}{t}\int_0^t f(X(s))\,ds = \frac{E(R)}{E(X)}, \ \text{wp1}, \tag{1}
\]
and
\[
\lim_{t\to\infty} \frac{1}{t}\int_0^t E[f(X(s))]\,ds = \frac{E(R)}{E(X)}, \tag{2}
\]
where $R = R_1 = \int_0^{X_1} f(X(s))\,ds$, and $X = X_1$.
Proof: Let $N(t) = \max\{j : \tau_j \le t\}$ denote the number of renewals (regenerations) by time $t$. First note that if $f \ge 0$, then since $\tau_{N(t)} \le t < \tau_{N(t)+1}$, by defining $R_j = \int_{\tau_{j-1}}^{\tau_j} f(X(s))\,ds$, (1) can be derived by using the following sandwiching bounds (upper and lower bounds),
\[
\frac{1}{t}\sum_{j=1}^{N(t)} R_j \ \le\ \frac{1}{t}\int_0^t f(X(s))\,ds \ \le\ \frac{1}{t}\sum_{j=1}^{N(t)+1} R_j. \tag{3}
\]
Both these bounds converge to $E(R)/E(X)$, yielding the result: For the lower bound,
\[
\frac{1}{t}\sum_{j=1}^{N(t)} R_j = \frac{N(t)}{t}\cdot\frac{1}{N(t)}\sum_{j=1}^{N(t)} R_j;
\]
$N(t)/t \to 1/E(X)$ by the elementary renewal theorem, while $\frac{1}{N(t)}\sum_{j=1}^{N(t)} R_j \to E(R)$ by the strong law of large numbers, since the $R_j$ are iid and have finite first moment by the theorem's hypothesis.
Similarly, for the upper bound, noting that $(N(t)+1)/t \to 1/E(X)$ (since $1/t \to 0$), and rewriting the upper bound as
\[
\frac{N(t)+1}{t}\cdot\frac{1}{N(t)+1}\sum_{j=1}^{N(t)+1} R_j,
\]
we obtain the same limit for this upper bound, thus completing the proof of (1) in this non-negative case. Note in passing from (3) that, because the upper bound converges to the same (finite) limit as the lower bound, it must hold that
\[
\frac{R_{N(t)+1}}{t} \to 0, \ \text{wp1}, \tag{4}
\]
because $R_{N(t)+1}/t$ is precisely the difference between the upper and lower bounds.
If $f$ is not non-negative, then first observe that since $|f|$ is non-negative, we can apply the above proof to $\frac{1}{t}\int_0^t |f(X(s))|\,ds$ to conclude from (4) that $\bar R_{N(t)+1}/t \to 0$, wp1, where $\bar R_j = \int_{\tau_{j-1}}^{\tau_j} |f(X(s))|\,ds$, $j \ge 1$. The proof of (1) is then completed by writing
\[
\frac{1}{t}\int_0^t f(X(s))\,ds = \frac{1}{t}\sum_{j=1}^{N(t)} R_j + \frac{1}{t}\int_{\tau_{N(t)}}^{t} f(X(s))\,ds,
\]
noting that the first piece on the rhs converges to the desired answer, and the second piece (the error) tends to 0 by (4),
\[
\Bigl|\frac{1}{t}\int_{\tau_{N(t)}}^{t} f(X(s))\,ds\Bigr| \ \le\ \frac{1}{t}\int_{\tau_{N(t)}}^{t} |f(X(s))|\,ds \ \le\ \frac{1}{t}\int_{\tau_{N(t)}}^{\tau_{N(t)+1}} |f(X(s))|\,ds = \frac{\bar R_{N(t)+1}}{t} \to 0.
\]
($E(|R_j|) < \infty$ since $|R_j| \le \bar R_j$ and $E(\bar R_j) < \infty$ by the theorem's hypothesis.)
To prove (2) we must show that $\{R(t)/t : t \ge 1\}$ is uniformly integrable (UI), where $R(t) \stackrel{\mathrm{def}}{=} \int_0^t f(X(s))\,ds$. Since $|R(t)| \le \int_0^t |f(X(s))|\,ds$, it follows that
\[
|R(t)|/t \ \le\ Y(t) = \frac{1}{t}\sum_{j=1}^{N(t)+1} \bar R_j. \tag{5}
\]
We know that $Y(t) \to E(\bar R)/E(X)$ wp1, and from Wald's equation, $E(Y(t)) = \frac{E(N(t))+1}{t}\,E(\bar R) \to E(\bar R)/E(X)$ (via the elementary renewal theorem); thus $\{Y(t) : t \ge 1\}$ is uniformly integrable (UI)$^1$, and since $|R(t)|/t \le Y(t)$, $\{R(t)/t : t \ge 1\}$ is UI as well.
As an immediate application of Theorem 2.1, take $f(x) = I\{x \le b\}$ with $b$ constant. Such a function is non-negative and bounded, so the hypothesis of Theorem 2.1 is immediately verified.

Corollary 2.1 A positive recurrent regenerative process has a limiting distribution: Letting $X^*$ denote a rv with this limiting distribution,
\[
P(X^* \le b) = \lim_{t\to\infty} \frac{1}{t}\int_0^t I\{X(s) \le b\}\,ds = \frac{E(R)}{E(X)}, \ \text{wp1}, \ b \in \mathbb{R},
\]
where $R = \int_0^{X_1} I\{X(s) \le b\}\,ds$, and $X = X_1$. Also,
\[
P(X^* \le b) = \lim_{t\to\infty} \frac{1}{t}\int_0^t P(X(s) \le b)\,ds = \frac{E(R)}{E(X)}, \ b \in \mathbb{R}.
\]
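Corollary 2.1 can also be illustrated by simulation. A hypothetical example (my choice of process and parameters, not from the notes): the age process $A(t)$ of a renewal process with Uniform(0,1) cycle lengths restarts from 0 at each renewal, so renewals are regeneration times, and within a cycle of length $X_1$ the time spent with age $\le b$ is $\min(X_1, b)$.

```python
import random

# Hypothetical check of Corollary 2.1: the age process of a renewal
# process with Uniform(0,1) cycle lengths.  With f(x) = I{x <= b},
# the cycle reward is R = min(X_1, b), so the corollary predicts
#   P(A* <= b) = E min(X, b) / E(X) = (b - b^2/2) / (1/2) = 2b - b^2.

def limiting_cdf_estimate(b, n_cycles, rng):
    reward, total = 0.0, 0.0
    for _ in range(n_cycles):
        x = rng.random()        # Uniform(0,1) cycle length
        reward += min(x, b)     # time during the cycle with age <= b
        total += x
    return reward / total

rng = random.Random(0)
b = 0.4
est = limiting_cdf_estimate(b, n_cycles=200_000, rng=rng)
theory = 2 * b - b * b          # closed form for Uniform(0,1) cycles
print(est, theory)
```

For $b = 0.4$ the predicted value is $2(0.4) - 0.16 = 0.64$, which the estimate should match closely at this sample size.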
2.2 Weak convergence to the limiting distribution
Theorem 2.2 A positive recurrent regenerative process with a non-lattice cycle length distribution, $F(x) = P(X_1 \le x)$, converges weakly to its limiting distribution: as $t \to \infty$,
\[
X(t) \Rightarrow X^*,
\]
where $X^*$ has the limiting distribution given in Corollary 2.1.

Proof: Let $f = f(x)$ be a bounded continuous function; we must show that $E(f(X(t))) \to E(f(X^*))$. Conditioning on $\tau_1 > t$ and $\tau_1 = s \le t$ yields (in the usual manner)
\[
E(f(X(t))) = E(f(X(t)); \tau_1 > t) + \int_0^t E(f(X(t-s)))\,dF(s).
\]
The function $Q(t) = E(f(X(t)); \tau_1 > t)$ can be shown to be directly Riemann integrable (DRI) because it is bounded by a constant multiple of $P(\tau_1 > t)$ (itself already DRI) and is right-continuous (by the continuity and boundedness of $f$ and the always assumed right-continuity of the paths of $X$). Thus, by the key renewal theorem,
\[
E(f(X(t))) \to \{E(\tau_1)\}^{-1}\int_0^\infty E(f(X(t)); \tau_1 > t)\,dt.
\]

$^1$A collection $\{X(t)\}$ of non-negative rvs for which $X(t) \to X$ wp1 with $E(X) < \infty$ is UI if and only if $E(X(t)) \to E(X)$.
The limit can be re-written (via Fubini's theorem) as
\[
\{E(\tau_1)\}^{-1}\, E\Bigl(\int_0^{\tau_1} f(X(t))\,dt\Bigr);
\]
the same limit as in Theorem 2.1.
2.3 Delayed regenerative processes
Given a regenerative process, $X = \{X(t) : t \ge 0\}$, it is natural to want to begin observing it starting at some future time $t = s$ (say) (random or deterministic) as opposed to at time $t = 0$; we thus would consider the process $\{X(s+t) : t \ge 0\}$. But if we do that, then the first cycle will be a remaining cycle, and thus not be the same (in distribution) as a typical future iid one. More generally, we can take any regenerative process, and simply replace the first cycle by something different in distribution from (but independent of) the others, and yet it will keep regenerating once the first cycle ends.
A regenerative process in which the first cycle has been replaced by one with a different distribution from (but is still independent of) the iid cycles following it is called a delayed regenerative process with delayed initial cycle $C_0$ and delayed initial cycle length $\tau_0$. Thus, there is an initial random time $0 \le \tau_0 < \tau_1$, and the initial cycle is given by $C_0 = \{\{X(t) : 0 \le t < \tau_0\}, \tau_0\}$. $X_1 = \tau_1 - \tau_0$, and the next cycle is $C_1 = \{\{X(\tau_0 + t) : 0 \le t < X_1\}, X_1\}$, and so on. $C_0$ is independent of the other iid cycles $\{C_k : k \ge 1\}$, but is allowed to have a different distribution. The renewal process $\{\tau_n : n \ge 0\}$ is a delayed renewal process with initial delay $\tau_0$. (We assume that $P(\tau_0 > 0) > 0$.)
When there is no delay cycle, a regenerative process is called non-delayed. Note that once the delay is finished, a delayed regenerative process becomes non-delayed; thus $\{X(\tau_0 + t) : t \ge 0\}$ is a regular non-delayed regenerative process, called a non-delayed version. A delayed regenerative process is called positive recurrent if its non-delayed version is so, that is, if $0 < E(X_1) < \infty$. It is not required that $\tau_0$ have finite first moment, but we of course assume that it is a proper rv, that is, that $P(\tau_0 < \infty) = 1$, so that $C_0$ will end wp1.
As a common example, given a positive recurrent CTMC and using visits to state 1 as regeneration times, what if it started in state 2 instead? Clearly, when $X(0) = 2$, once the chain enters state 1 (which it will by recurrence), we are back to the non-delayed case; the delay is short-lived. In this case $\tau_0 = \tau_{2,1} = \inf\{t > 0 : X(t) = 1 \mid X(0) = 2\}$, the hitting time to state 1 given $X(0) = 2$, and $C_0$ captures the evolution of the chain until it hits state 1.
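This example can be sketched in code. The two-state chain below (rate $a$ out of state 1, rate $b$ out of state 2) and all parameter values are my own illustrative assumptions, not from the notes: started in state 2, the first (delay) cycle lasts $\tau_0 \sim$ Exp($b$) until the chain first hits state 1, yet the long-run fraction of time in state 1 is still $b/(a+b)$, unaffected by the delay.

```python
import random

# Hypothetical sketch of the delayed CTMC example: a two-state chain with
# exponential sojourns, rate a out of state 1 and rate b out of state 2,
# started in state 2 (a delayed start).  Visits to state 1 are the
# regeneration times; the delay cycle is the initial Exp(b) sojourn in 2.
# The limiting fraction of time in state 1 is b / (a + b) regardless.

def frac_time_in_state_1(a, b, t_horizon, rng):
    state, clock, time_in_1 = 2, 0.0, 0.0   # delayed start: X(0) = 2
    while clock < t_horizon:
        rate = a if state == 1 else b
        sojourn = rng.expovariate(rate)
        stay = min(sojourn, t_horizon - clock)  # truncate at the horizon
        if state == 1:
            time_in_1 += stay
        clock += stay
        state = 2 if state == 1 else 1
    return time_in_1 / t_horizon

rng = random.Random(123)
a, b = 2.0, 1.0
est = frac_time_in_state_1(a, b, t_horizon=100_000.0, rng=rng)
theory = b / (a + b)    # = 1/3, the same as for a non-delayed start
print(est, theory)
```

Re-running with `state = 1` as the starting state should give the same limit, which is the content of Proposition 2.1 below.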
Since a delay cycle $C_0$ eventually ends, limiting results such as Theorem 2.1 should and do remain valid, with some minor regularity conditions placed on $C_0$ ensuring that its effect is asymptotically negligible as $t \to \infty$. We deal with this next. The reward and cycle used in such results must now be the first iid one, $R = R_1 = \int_{\tau_0}^{\tau_0 + X_1} f(X(s))\,ds$ of length $X_1$, not of course the delay one, $R_0 = \int_0^{\tau_0} f(X(s))\,ds$ of length $\tau_0$.
Corollary 2.1 remains valid without any further conditions:
Proposition 2.1 A delayed positive recurrent regenerative process (with no conditions placed on the delay cycle $C_0$) has the same limiting distribution as its non-delayed counterpart: Letting $X^*$ denote a rv with this limiting distribution,
\[
P(X^* \le b) = \lim_{t\to\infty}\frac{1}{t}\int_0^t I\{X(s) \le b\}\,ds = \frac{E(R)}{E(X)}, \ \text{wp1}, \ b \in \mathbb{R},
\]
where $R = R_1 = \int_{\tau_0}^{\tau_0 + X_1} I\{X(s) \le b\}\,ds$, and $X = X_1$. Also,
\[
P(X^* \le b) = \lim_{t\to\infty}\frac{1}{t}\int_0^t P(X(s) \le b)\,ds = \frac{E(R)}{E(X)}, \ b \in \mathbb{R}.
\]
Proof: For $t > \tau_0$, we have wp1,
\[
\frac{1}{t}\int_0^t I\{X(s) \le b\}\,ds = \frac{R_0}{t} + \frac{1}{t}\int_{\tau_0}^t I\{X(s) \le b\}\,ds,
\]
where $R_0 = \int_0^{\tau_0} I\{X(s) \le b\}\,ds \le \tau_0$; thus $R_0/t \to 0$, wp1. The second piece is a non-delayed version, and can be re-written as
\[
\frac{t - \tau_0}{t}\cdot\frac{1}{t - \tau_0}\int_{\tau_0}^t I\{X(s) \le b\}\,ds,
\]
hence converges wp1, as $t \to \infty$, to the desired $\frac{E(R)}{E(X)}$ by the first part of Corollary 2.1. Taking the expected value of $\frac{1}{t}\int_0^t I\{X(s) \le b\}\,ds$ and applying the bounded convergence theorem then yields the second result.
In general, some further conditions on $C_0$ are needed to obtain Theorem 2.1: The only further condition needed for (1) is
\[
\int_0^{\tau_0} |f(X(s))|\,ds < \infty, \ \text{wp1}, \tag{6}
\]
and for (2) a sufficient condition is
\[
E\Bigl(\int_0^{\tau_0} |f(X(s))|\,ds\Bigr) < \infty. \tag{7}
\]
Theorem 2.3 If $X$ is a delayed positive recurrent regenerative process, and $f = f(x)$ is a (measurable) function such that $E\bigl(\int_{\tau_0}^{\tau_0 + X_1} |f(X(s))|\,ds\bigr) < \infty$, and also (6) holds, then
\[
\lim_{t\to\infty}\frac{1}{t}\int_0^t f(X(s))\,ds = \frac{E(R)}{E(X)}, \ \text{wp1}, \tag{8}
\]
and if additionally (7) holds, then also
\[
\lim_{t\to\infty}\frac{1}{t}\int_0^t E[f(X(s))]\,ds = \frac{E(R)}{E(X)}, \tag{9}
\]
where $R = R_1 = \int_{\tau_0}^{\tau_0 + X_1} f(X(s))\,ds$, and $X = X_1$.
Proof: $R_0 = \int_0^{\tau_0} f(X(s))\,ds$, and the proof of (8) is identical to the proof in Proposition 2.1, with condition (6) ensuring that $P(|R_0| < \infty) = 1$, so that $R_0/t \to 0$.

For (9) we argue UI as we did in proving (2), modifying the upper bound in (5) to a stochastic upper bound
\[
|R(t)|/t \ \le\ K(t) = \frac{\bar R_0}{t} + Y(t),
\]
where $\bar R_0 = \int_0^{\tau_0} |f(X(s))|\,ds$ is independent of $\{Y(t)\}$, and $Y(t)$ is constructed from a non-delayed version, and hence (as we proved) $E(Y(t)) \to \frac{E(\bar R)}{E(X)}$. Since $E(\bar R_0) < \infty$ by (7), it follows that $E(\bar R_0)/t \to 0$, and hence $\{K(t)\}$ is UI; thus so is $\{R(t)/t\}$, as was to be shown.

That (7) is not necessary for (9) can be seen by Proposition 2.1. That is because whenever $f$ is non-negative and bounded (such as $f(x) = I\{x \le b\}$), (6) is automatically satisfied, but (7) might not be satisfied even though (9) still holds (by applying the bounded convergence theorem to (8)).
Finally Theorem 2.2 remains valid (the proof is the same, we leave it out):
Theorem 2.4 With no extra conditions placed on the delay distribution $P(\tau_0 \le x)$, a positive recurrent delayed regenerative process with a non-lattice cycle length distribution, $F(x) = P(X_1 \le x)$, weakly converges to its limiting distribution: as $t \to \infty$,
\[
X(t) \Rightarrow X^*,
\]
where $X^*$ has the limiting distribution given in Corollary 2.1.

2.4 Stationary versions of regenerative processes

By choosing an appropriate delay cycle $C_0$, with its corresponding initial delay $\tau_0$, we can make the regenerative process a stationary process, called a stationary version of the process, denoted by $X^* = \{X^*(t) : t \ge 0\}$. Not only does $X^*(s)$ have the distribution given in Corollary 2.1 for every $s$, but much more is true: For every fixed $s \ge 0$, the entire shifted process $\{X^*(s+t) : t \ge 0\}$ has the same distribution as $\{X^*(t) : t \ge 0\}$. This means that all joint distributions are the same. For example, $(X^*(s+t_1), X^*(s+t_2), \ldots, X^*(s+t_k))$ has the same distribution as $(X^*(t_1), X^*(t_2), \ldots, X^*(t_k))$ for any $s \ge 0$. In particular, $X^*(0)$ has the limiting distribution of Corollary 2.1. For a positive recurrent CTMC started with its stationary distribution, and using visits to a fixed state $i$ as regeneration times, $\tau_0 = \inf\{t \ge 0 : X(t) = i\}$ and $C_0$ is simply the evolution of the chain during $[0, \tau_0)$.
In general, though, how do we find $C_0$? First we must recall that the underlying renewal process has a stationary version by using delay $\tau_0 \sim F_e$, and we obtained $F_e$ (as a limiting distribution) by randomly observing the renewal process at a time point way out in the infinite future. We now do the same thing for the entire regenerative process: We observe $X_s = \{X(s+t) : t \ge 0\}$, in which the origin (time $s$) is randomly chosen way out in the infinite future. (We choose $s$ uniformly over $(0, t)$ and then let $t \to \infty$.) For each $s$, let $C_0(s)$ denote the delay cycle associated with $X_s$. It is precisely the remaining cycle of the one covering time $s$. (It is a generalization of forward recurrence time.) It has delay length $\tau_0(s) = A(s)$, the forward recurrence time. For each $s \ge 0$, the only part of $X_s$ that can change in distribution is this delay cycle $C_0(s)$, because all the others are iid regular ones. Thus we get a stationary version of the regenerative process if and only if the distribution of $C_0(s)$ is the same for all $s \ge 0$. (This is the same basic argument we used to produce a stationary version of a renewal process.)
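The random-origin construction can be sampled directly. In this hypothetical sketch (my own choice of cycle distribution and parameters, not from the notes), we build a renewal process with Uniform(0,1) cycle lengths, pick an observation time $s$ uniformly over $(0, t)$, and record the forward recurrence time $A(s)$; its empirical distribution should approach the equilibrium distribution, which for Uniform(0,1) cycles is $F_e(x) = 2x - x^2$.

```python
import bisect
import random

# Hypothetical illustration of where the stationary delay length comes
# from: observe a renewal process with Uniform(0,1) cycle lengths at a
# time s ~ Uniform(0, t) for large t, and record the forward recurrence
# time A(s) (time until the next renewal).  Its distribution should
# approach F_e(x) = 2x - x^2 on [0, 1].

def sample_forward_recurrence(t_horizon, n_samples, rng):
    # build one long renewal path (cumulative renewal epochs)
    times, clock = [], 0.0
    while clock <= t_horizon + 1.0:
        clock += rng.random()
        times.append(clock)
    samples = []
    for _ in range(n_samples):
        s = rng.uniform(0.0, t_horizon)
        nxt = times[bisect.bisect_right(times, s)]  # next renewal after s
        samples.append(nxt - s)                     # A(s)
    return samples

rng = random.Random(5)
samples = sample_forward_recurrence(t_horizon=50_000.0, n_samples=100_000, rng=rng)
x = 0.4
emp = sum(1 for a in samples if a <= x) / len(samples)
theory = 2 * x - x * x      # F_e(0.4) = 0.64
print(emp, theory)
```

The same experiment with the age (time since the last renewal) instead of $A(s)$ would give the same limiting $F_e$.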
$\tau_0(s) = A(s)$ we already know has $F_e$ as its limiting distribution, from our Lecture Notes on renewal theory. It turns out that the entire cycle $C_0(s)$ has a limiting distribution. Letting $C_0^* = \{\{X^*(t) : 0 \le t < \tau_0^*\}, \tau_0^*\}$ denote such a limiting delay cycle with this limiting distribution, the length $\tau_0^*$ must of course be distributed as $F_e$. It turns out that the entire distribution of $C_0^*$, denoted by $P(C_0^* \in B)$ (for appropriate path sets $B$), is given by
\[
P(C_0^* \in B) = \lim_{t\to\infty}\frac{1}{t}\int_0^t P(C_0(s) \in B)\,ds = \frac{E(R)}{E(X)}, \tag{10}
\]
where $R = \int_0^{X_1} I\{C_0(s) \in B\}\,ds$, and $X = X_1$. This is because $\{C_0(s) : s \ge 0\}$ forms a positive recurrent regenerative process, with the same regeneration times as $X$ but with a more general state space, and the theory remains valid for such general state spaces; thus we are merely applying a more general form of Corollary 2.1. At a regeneration time $\tau_k$, $k \ge 1$, the distribution of $C_0(\tau_k)$ is that of a regular iid cycle $C_k$.
The same basic renewal reward proof works for obtaining (10):
\[
\frac{1}{t}\int_0^t I\{C_0(s) \in B\}\,ds \approx \frac{1}{t}\sum_{j=1}^{N(t)} R_j,
\]
where $R_j = \int_{\tau_{j-1}}^{\tau_j} I\{C_0(s) \in B\}\,ds$.
That $C_0^*(u)$ (the delay cycle at time $u$ from $X^*$) has this same distribution for every $u \ge 0$ follows from
\[
P(C_0^*(u) \in B) = \lim_{t\to\infty}\frac{1}{t}\int_0^t P(C_0(s+u) \in B)\,ds = \lim_{t\to\infty}\frac{1}{t}\int_0^t P(C_0(s) \in B)\,ds. \tag{11}
\]
2.5 Time averages as expected values under the stationary distribution
A general result is that time averages are equal to the expected value under the stationary distribution. Letting $X^* = \{X^*(t) : t \ge 0\}$ denote a stationary version of the process, Theorem 2.1 takes the form
\[
\lim_{t\to\infty}\frac{1}{t}\int_0^t f(X(s))\,ds = E(f(X^*(0))), \ \text{wp1}, \tag{12}
\]
and, in particular (taking $f(x) = x$),
\[
\lim_{t\to\infty}\frac{1}{t}\int_0^t X(s)\,ds = E(X^*(0)), \ \text{wp1}. \tag{13}
\]
For example, if $X$ has a discrete state space $\{0, 1, 2, \ldots\}$ with limiting probabilities $P_n = P(X^*(0) = n)$, then the time average of $X$ equals
\[
E(X^*(0)) = \sum_n n P_n. \tag{14}
\]
To prove (14) directly, recall that wp1,
\[
P_n = \lim_{t\to\infty}\frac{1}{t}\int_0^t I\{X(s) = n\}\,ds.
\]
Now let $I_n(t) = \int_0^t I\{X(s) = n\}\,ds$; thus $I_n(t)/t \to P_n$. Then
\[
\frac{1}{t}\int_0^t X(s)\,ds = \frac{1}{t}\sum_{n=0}^\infty n\,I_n(t) = \sum_n n\,(I_n(t)/t) \to \sum_n n P_n.
\]
The interchange of limit and infinite sum can be justified.
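As a numerical illustration of (14) (my own choice of example, not from the notes): the number-in-system process of an M/M/1 queue is regenerative, with regenerations at arrivals to an empty system, and here $P_n = (1-\rho)\rho^n$ so that $\sum_n n P_n = \rho/(1-\rho)$.

```python
import random

# Hypothetical check of (14) using the number-in-system process of an
# M/M/1 queue (lambda = 0.5, mu = 1.0, so rho = 0.5).  The chain of
# events is simulated directly; the time average of X(s) should match
# sum_n n P_n = rho / (1 - rho) = 1.

def mm1_time_average(lam, mu, t_horizon, rng):
    clock, n, area = 0.0, 0, 0.0
    next_arrival = rng.expovariate(lam)
    next_departure = float('inf')       # no one in service yet
    while clock < t_horizon:
        t_next = min(next_arrival, next_departure, t_horizon)
        area += n * (t_next - clock)    # accumulate the integral of X(s)
        clock = t_next
        if clock >= t_horizon:
            break
        if next_arrival <= next_departure:
            n += 1
            if n == 1:                  # arrival to an empty system
                next_departure = clock + rng.expovariate(mu)
            next_arrival = clock + rng.expovariate(lam)
        else:
            n -= 1
            next_departure = (clock + rng.expovariate(mu)
                              if n > 0 else float('inf'))
    return area / t_horizon

rng = random.Random(99)
est = mm1_time_average(lam=0.5, mu=1.0, t_horizon=500_000.0, rng=rng)
print(est)   # theory: rho / (1 - rho) = 1.0
```

The long horizon is needed because the queue-length process mixes slowly relative to a single cycle; shorter runs give noticeably noisier estimates.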
For the more general (13), once again assume non-negativity. Then $E(X^*(0)) = \int_0^\infty P(X^*(0) > b)\,db$, and we also know that $P(X^*(0) > b) = \lim_{t\to\infty}\frac{1}{t}\int_0^t P(X(s) > b)\,ds$ from Corollary 2.1; the interchange of the limit with $\int_0^\infty \cdots \,db$ can again be justified. Finally, since $X^*$ is stationary, $E(f(X^*(s))) = E(f(X^*(0)))$ for all $s \ge 0$, so applying Theorem 2.1 to the stationary version yields
\[
\frac{E(R)}{E(X)} = \lim_{t\to\infty}\frac{1}{t}\int_0^t E(f(X^*(s)))\,ds = \lim_{t\to\infty}\frac{1}{t}\int_0^t E(f(X^*(0)))\,ds = E(f(X^*(0)));
\]
we have derived (12).
Discrete-time regenerative processes

For a stochastic sequence $\{Y_n : n \ge 0\}$, the definition of regenerative is the same. In this case, $C_1 = \{\{Y_n : 0 \le n \le \tau_1 - 1\}, \tau_1\}$, and the renewal process is a discrete-time renewal process; $N(n)/n \to 1/E(X)$ as $n \to \infty$. Because of the discrete setting, the $k$-th cycle begins at time $\tau_{k-1}$ and ends at time $\tau_k - 1$, and there are $X_k$ values of the $Y_n$ in the cycle. The analog of Theorem 2.1 is
Theorem 2.6 If $Y$ is a positive recurrent regenerative sequence, and $f = f(x)$ is a (measurable) function such that $E\bigl(\sum_{j=0}^{X_1 - 1} |f(Y_j)|\bigr) < \infty$, then
\[
\lim_{n\to\infty}\frac{1}{n}\sum_{j=1}^n f(Y_j) = \frac{E(R)}{E(X)}, \ \text{wp1}, \tag{16}
\]
and
\[
\lim_{n\to\infty}\frac{1}{n}\sum_{j=1}^n E(f(Y_j)) = \frac{E(R)}{E(X)}, \tag{17}
\]
where $R = R_1 = \sum_{j=0}^{X_1 - 1} f(Y_j)$, and $X = X_1$.
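Theorem 2.6 can be checked on a simple example of my own choosing (not from the notes): a two-state Markov chain on $\{0, 1\}$ with transition probabilities $P(0 \to 1) = 0.3$ and $P(1 \to 0) = 0.6$. Visits to state 0 are regeneration times, and with $f(y) = y$ the limit (16) is the stationary probability $\pi_1 = 0.3/(0.3 + 0.6) = 1/3$.

```python
import random

# Hypothetical discrete-time check of Theorem 2.6: a two-state Markov
# chain on {0, 1} with P(0 -> 1) = 0.3 and P(1 -> 0) = 0.6.  Visits to
# state 0 are regeneration times; with f(y) = y, the sample average
# (1/n) sum_j f(Y_j) should converge to pi_1 = 0.3 / (0.3 + 0.6) = 1/3.

def chain_average(p01, p10, n_steps, rng):
    y, total = 0, 0
    for _ in range(n_steps):
        if y == 0:
            y = 1 if rng.random() < p01 else 0
        else:
            y = 0 if rng.random() < p10 else 1
        total += y
    return total / n_steps

rng = random.Random(11)
est = chain_average(0.3, 0.6, n_steps=1_000_000, rng=rng)
print(est)   # theory: 1/3
```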