1. INTRODUCTION TO FINITE MARKOV CHAINS

1.5.3. Existence of a stationary distribution. The Convergence Theorem (Theorem 4.9 below) implies that the "long-term" fractions of time a finite irreducible aperiodic Markov chain spends in each state coincide with the chain's stationary distribution. However, we have not yet demonstrated that stationary distributions exist! To build a candidate distribution, we consider a sojourn of the chain from some arbitrary state $z$ back to $z$. Since visits to $z$ break up the trajectory of the chain into identically distributed segments, it should not be surprising that the average fraction of time per segment spent in each state $y$ coincides with the "long-term" fraction of time spent in $y$.

Proposition 1.14. Let $P$ be the transition matrix of an irreducible Markov chain. Then
(i) there exists a probability distribution $\pi$ on $\Omega$ such that $\pi = \pi P$ and $\pi(x) > 0$ for all $x \in \Omega$, and moreover,
(ii) $\pi(x) = \dfrac{1}{E_x(\tau_x^+)}$.

Remark 1.15. We will see in Section 1.7 that existence of $\pi$ does not need irreducibility, but positivity does.

Proof. Let $z$ be an arbitrary state of the Markov chain. We will closely examine the time the chain spends, on average, at each state in between visits to $z$. Hence define
\[
\tilde{\pi}(y) := E_z(\text{number of visits to } y \text{ before returning to } z)
= \sum_{t=0}^{\infty} P_z\{X_t = y,\ \tau_z^+ > t\}. \tag{1.19}
\]
For any state $y$, we have $\tilde{\pi}(y) \le E_z \tau_z^+$. Hence Lemma 1.13 ensures that $\tilde{\pi}(y) < \infty$ for all $y \in \Omega$. We check that $\tilde{\pi}$ is stationary, starting from the definition:
\[
\sum_{x \in \Omega} \tilde{\pi}(x) P(x, y) = \sum_{x \in \Omega} \sum_{t=0}^{\infty} P_z\{X_t = x,\ \tau_z^+ > t\}\, P(x, y). \tag{1.20}
\]
Because the event $\{\tau_z^+ \ge t + 1\} = \{\tau_z^+ > t\}$ is determined by $X_0, \ldots, X_t$,
\[
P_z\{X_t = x,\ X_{t+1} = y,\ \tau_z^+ \ge t + 1\} = P_z\{X_t = x,\ \tau_z^+ \ge t + 1\}\, P(x, y). \tag{1.21}
\]
Reversing the order of summation in (1.20) and using the identity (1.21) shows that
\[
\sum_{x \in \Omega} \tilde{\pi}(x) P(x, y) = \sum_{t=0}^{\infty} P_z\{X_{t+1} = y,\ \tau_z^+ \ge t + 1\}
= \sum_{t=1}^{\infty} P_z\{X_t = y,\ \tau_z^+ \ge t\}. \tag{1.22}
\]
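Both parts of Proposition 1.14 can be checked numerically for a concrete chain. The sketch below (the 3-state matrix $P$ is a hypothetical example, not taken from the text) finds $\pi$ as the normalized left eigenvector of $P$ for eigenvalue 1, computes each return time $E_x(\tau_x^+)$ by first-step analysis, and verifies that $\pi(x) = 1/E_x(\tau_x^+)$:

```python
import numpy as np

# Hypothetical irreducible 3-state transition matrix (not from the text).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
n = P.shape[0]

# Stationary distribution: pi = pi P, i.e. a left eigenvector of P for
# eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

assert np.allclose(pi @ P, pi)   # pi is stationary
assert np.all(pi > 0)            # positivity, as in Proposition 1.14(i)

# Return times E_x(tau_x^+) by first-step analysis: with h(y) the expected
# hitting time of x from y (h(x) = 0), h solves (I - Q) h = 1 where Q is P
# restricted to the states other than x; then E_x(tau_x^+) = 1 + P(x,.) h.
for x in range(n):
    others = [y for y in range(n) if y != x]
    A = np.eye(n - 1) - P[np.ix_(others, others)]
    h = np.linalg.solve(A, np.ones(n - 1))
    E_return = 1.0 + P[x, others] @ h
    assert np.isclose(pi[x], 1.0 / E_return)   # Proposition 1.14(ii)
```

Since both $\pi$ and the return times come from exact linear algebra rather than simulation, the identity holds to machine precision.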
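The construction in the proof can also be carried out exactly. Writing $v_t(y) = P_z\{X_t = y,\ \tau_z^+ > t\}$, one has $v_0 = e_z$ and $v_{t+1} = (v_t P) D$, where $D$ zeroes out coordinate $z$ (mass that has returned to $z$ is discarded); summing the geometric series in (1.19) gives $\tilde{\pi} = e_z (I - PD)^{-1}$. The sketch below uses a hypothetical 3-state chain (not from the text) to check that $\tilde{\pi}(z) = 1$, that $\tilde{\pi}$ is stationary as claimed in (1.22), and that normalizing by $E_z \tau_z^+ = \sum_y \tilde{\pi}(y)$ recovers $\pi$:

```python
import numpy as np

# Hypothetical irreducible 3-state transition matrix (not from the text).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
n = P.shape[0]
z = 0                               # arbitrary base state, as in the proof

D = np.eye(n)
D[z, z] = 0.0                       # discard mass that has returned to z
e_z = np.zeros(n)
e_z[z] = 1.0

# pi~(y): expected visits to y before returning to z, equation (1.19),
# computed by summing the series v_0 + v_1 + ... = e_z (I - PD)^{-1}.
pi_tilde = e_z @ np.linalg.inv(np.eye(n) - P @ D)

assert np.isclose(pi_tilde[z], 1.0)         # one visit to z per sojourn
assert np.allclose(pi_tilde @ P, pi_tilde)  # stationarity, as in (1.22)

pi = pi_tilde / pi_tilde.sum()              # divide by E_z(tau_z^+)
assert np.allclose(pi @ P, pi) and np.all(pi > 0)
```

Note that the normalizer $\tilde{\pi}$ sums to $E_z \tau_z^+$, which is finite by Lemma 1.13, so the final line produces the probability distribution of Proposition 1.14.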