1. INTRODUCTION TO FINITE MARKOV CHAINS

A function is harmonic on $D$ if it is harmonic at every state $x \in D$. If $h$ is regarded as a column vector, then a function which is harmonic on all of $\Omega$ satisfies the matrix equation $Ph = h$.

Lemma 1.16. Suppose that $P$ is irreducible. A function $h$ which is harmonic at every point of $\Omega$ is constant.

Proof. Since $\Omega$ is finite, there must be a state $x_0$ such that $h(x_0) = M$ is maximal. If for some state $z$ such that $P(x_0, z) > 0$ we have $h(z) < M$, then
$$h(x_0) = P(x_0, z)h(z) + \sum_{y \neq z} P(x_0, y)h(y) < M, \tag{1.29}$$
a contradiction. It follows that $h(z) = M$ for all states $z$ such that $P(x_0, z) > 0$.

For any $y \in \Omega$, irreducibility implies that there is a sequence $x_0, x_1, \dots, x_n = y$ with $P(x_i, x_{i+1}) > 0$. Repeating the argument above tells us that $h(y) = h(x_{n-1}) = \cdots = h(x_0) = M$. Thus $h$ is constant. $\square$

Corollary 1.17. Let $P$ be the transition matrix of an irreducible Markov chain. There exists a unique probability distribution $\pi$ satisfying $\pi = \pi P$.

Proof. By Proposition 1.14 there exists at least one such measure. Lemma 1.16 implies that the kernel of $P - I$ has dimension 1, so the column rank of $P - I$ is $|\Omega| - 1$. Since the row rank of any square matrix is equal to its column rank, the row-vector equation $\nu = \nu P$ also has a one-dimensional space of solutions. This space contains only one vector whose entries sum to 1. $\square$

Remark 1.18. Another proof of Corollary 1.17 follows from the Convergence Theorem (Theorem 4.9, proved below). Another simple direct proof is suggested in Exercise 1.13.

1.6. Reversibility and Time Reversals

Suppose a probability $\pi$ on $\Omega$ satisfies
$$\pi(x)P(x, y) = \pi(y)P(y, x) \quad \text{for all } x, y \in \Omega. \tag{1.30}$$
The equations (1.30) are called the detailed balance equations.

Proposition 1.19. Let $P$ be the transition matrix of a Markov chain with state space $\Omega$. Any distribution $\pi$ satisfying the detailed balance equations (1.30) is stationary for $P$.

Proof. Sum both sides of (1.30) over all $y$:
$$\sum_{y \in \Omega} \pi(y)P(y, x) = \sum_{y \in \Omega} \pi(x)P(x, y) = \pi(x),$$
since $P$ is stochastic. $\square$
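As a quick numerical illustration of Corollary 1.17 (a sketch, not part of the text; the $3 \times 3$ matrix below is a made-up irreducible example), one can find the unique stationary distribution by solving the row-vector equation $\pi = \pi P$ together with the normalization $\sum_x \pi(x) = 1$. Since the solution space of $\nu = \nu P$ is one-dimensional, replacing one redundant equation with the normalization yields a nonsingular linear system.

```python
import numpy as np

# A small irreducible transition matrix P (rows sum to 1); hypothetical example.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.1, 0.5],
])

# Solve pi = pi P, i.e. (P^T - I) pi^T = 0, together with sum(pi) = 1.
# The kernel of P^T - I is one-dimensional (Lemma 1.16 / Corollary 1.17),
# so replacing one equation with the normalization gives a unique solution.
n = P.shape[0]
A = P.T - np.eye(n)
A[-1, :] = 1.0            # normalization row: entries of pi sum to 1
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

print(pi)                 # the unique stationary distribution
print(np.allclose(pi @ P, pi))  # stationarity check
```

For an irreducible chain the computed $\pi$ has strictly positive entries; the final check confirms $\pi P = \pi$ up to floating-point tolerance.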
Checking detailed balance is often the simplest way to verify that a particular distribution is stationary. Furthermore, when (1.30) holds,
$$\pi(x_0)P(x_0, x_1) \cdots P(x_{n-1}, x_n) = \pi(x_n)P(x_n, x_{n-1}) \cdots P(x_1, x_0). \tag{1.31}$$
We can rewrite (1.31) in the following suggestive form:
$$\mathbf{P}_\pi\{X_0 = x_0, \dots, X_n = x_n\} = \mathbf{P}_\pi\{X_0 = x_n, X_1 = x_{n-1}, \dots, X_n = x_0\}. \tag{1.32}$$
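A small numerical sketch (not from the text; the path graph below is a made-up example) of how Proposition 1.19 is used in practice: for simple random walk on a graph, the candidate $\pi(x) \propto \deg(x)$ satisfies the detailed balance equations (1.30), and stationarity then follows without summing $\pi P$ by hand.

```python
import numpy as np
from itertools import product

# Simple random walk on the path graph 0-1-2-3: from each vertex, move to a
# uniformly chosen neighbor. (Hypothetical example chain.)
adj = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)
deg = adj.sum(axis=1)
P = adj / deg[:, None]          # P(x, y) = 1/deg(x) for each neighbor y of x

# Candidate distribution: pi(x) proportional to deg(x).
pi = deg / deg.sum()

# Check the detailed balance equations (1.30): pi(x)P(x,y) = pi(y)P(y,x).
n = len(pi)
balanced = all(
    np.isclose(pi[x] * P[x, y], pi[y] * P[y, x])
    for x, y in product(range(n), repeat=2)
)
print(balanced)                 # (1.30) holds for this chain

# Proposition 1.19: detailed balance implies pi P = pi.
print(np.allclose(pi @ P, pi))
```

Here checking the $n^2$ pairwise equations (1.30) is a purely local computation, which is why detailed balance is often easier to verify than stationarity directly.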