1. INTRODUCTION TO FINITE MARKOV CHAINS

Figure 1.4. An example of a graph with vertex set {1, 2, 3, 4, 5} and 6 edges.

Remark 1.11. We have chosen a narrow definition of "graph" for simplicity. It is sometimes useful to allow edges connecting a vertex to itself, called loops. It is also sometimes useful to allow multiple edges connecting a single pair of vertices. Loops and multiple edges both contribute to the degree of a vertex and are counted as options when a simple random walk chooses a direction. See Section 6.5.1 for an example.

We will have much more to say about random walks on graphs throughout this book—but especially in Chapter 9.

1.5. Stationary Distributions

1.5.1. Definition. We saw in Example 1.1 that a distribution $\pi$ on $\Omega$ satisfying
$$\pi = \pi P \tag{1.14}$$
can have another interesting property: in that case, $\pi$ was the long-term limiting distribution of the chain. We call a probability $\pi$ satisfying (1.14) a stationary distribution of the Markov chain. Clearly, if $\pi$ is a stationary distribution and $\mu_0 = \pi$ (i.e. the chain is started in a stationary distribution), then $\mu_t = \pi$ for all $t \geq 0$.

Note that we can also write (1.14) elementwise. An equivalent formulation is
$$\pi(y) = \sum_{x \in \Omega} \pi(x) P(x, y) \quad \text{for all } y \in \Omega. \tag{1.15}$$

Example 1.12. Consider simple random walk on a graph $G = (V, E)$. For any vertex $y \in V$,
$$\sum_{x \in V} \deg(x) P(x, y) = \sum_{x : x \sim y} \frac{\deg(x)}{\deg(x)} = \deg(y). \tag{1.16}$$
To get a probability, we simply normalize by $\sum_{y \in V} \deg(y) = 2|E|$ (a fact the reader should check). We conclude that the probability measure
$$\pi(y) = \frac{\deg(y)}{2|E|} \quad \text{for all } y \in \Omega,$$
which is proportional to the degrees, is always a stationary distribution for the walk. For the graph in Figure 1.4,
$$\pi = \left( \tfrac{2}{12}, \tfrac{3}{12}, \tfrac{4}{12}, \tfrac{2}{12}, \tfrac{1}{12} \right).$$
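The computation in Example 1.12 can be checked numerically. The sketch below builds the transition matrix of a simple random walk and verifies (1.15) in exact arithmetic. Note that the edge set used here is only an assumption: Figure 1.4 is not reproduced, so we pick a graph with 5 vertices, 6 edges, and the degree sequence (2, 3, 4, 2, 1) implied by the stationary distribution given in the text.

```python
# Sketch: verify that pi(y) = deg(y) / (2|E|) satisfies pi = pi * P for
# simple random walk on a graph. The edge set is hypothetical -- it merely
# matches the degree sequence (2, 3, 4, 2, 1) from Example 1.12.
from fractions import Fraction

vertices = [1, 2, 3, 4, 5]
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (3, 5)]  # assumed edges

# Compute vertex degrees from the edge list.
deg = {v: 0 for v in vertices}
for u, w in edges:
    deg[u] += 1
    deg[w] += 1

# Transition matrix of simple random walk: P(x, y) = 1/deg(x) when x ~ y.
P = {(x, y): Fraction(0) for x in vertices for y in vertices}
for u, w in edges:
    P[(u, w)] = Fraction(1, deg[u])
    P[(w, u)] = Fraction(1, deg[w])

# Normalizing constant: sum of degrees equals 2|E| (here 12).
two_E = sum(deg.values())
pi = {y: Fraction(deg[y], two_E) for y in vertices}

# Check (1.15): pi(y) = sum over x of pi(x) * P(x, y), for every y.
for y in vertices:
    assert pi[y] == sum(pi[x] * P[(x, y)] for x in vertices)

print(sorted(pi.items()))
```

Using `Fraction` keeps the check exact, so the assertion confirms stationarity with no floating-point tolerance; the resulting $\pi$ is $(2/12, 3/12, 4/12, 2/12, 1/12)$ in lowest terms.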