1.2. RANDOM MAPPING REPRESENTATION

Rather than writing down the transition matrix in (1.11), this chain can be specified simply in words: at each step, a coin is tossed. If the coin lands heads up, the walk moves one step clockwise. If the coin lands tails up, the walk moves one step counterclockwise.

More precisely, suppose that Z is a random variable which is equally likely to take on the values −1 and +1. If the current state of the chain is j ∈ Z_n, then the next state is (j + Z) mod n. For any k ∈ Z_n,

P{(j + Z) mod n = k} = P(j, k).

In other words, the distribution of (j + Z) mod n equals P(j, ·).

A random mapping representation of a transition matrix P on state space Ω is a function f : Ω × Λ → Ω, along with a Λ-valued random variable Z, satisfying

P{f(x, Z) = y} = P(x, y).

The reader should check that if Z_1, Z_2, . . . is a sequence of independent random variables, each having the same distribution as Z, and X_0 has distribution µ, then the sequence (X_0, X_1, . . .) defined by

X_n = f(X_{n−1}, Z_n) for n ≥ 1

is a Markov chain with transition matrix P and initial distribution µ.

For the example of the simple random walk on the cycle, setting Λ = {1, −1}, each Z_i uniform on Λ, and f(x, z) = (x + z) mod n yields a random mapping representation.

Proposition 1.5. Every transition matrix on a finite state space has a random mapping representation.

Proof. Let P be the transition matrix of a Markov chain with state space Ω = {x_1, . . . , x_n}. Take Λ = [0, 1]; our auxiliary random variables Z, Z_1, Z_2, . . . will be uniformly chosen in this interval. Set F_{j,k} = ∑_{i=1}^{k} P(x_j, x_i) and define

f(x_j, z) := x_k when F_{j,k−1} < z ≤ F_{j,k}.

We have

P{f(x_j, Z) = x_k} = P{F_{j,k−1} < Z ≤ F_{j,k}} = P(x_j, x_k). □

Note that, unlike transition matrices, random mapping representations are far from unique. For instance, replacing the function f(x, z) in the proof of Proposition 1.5 with f(x, 1 − z) yields a different representation of the same transition matrix.
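The construction in the proof of Proposition 1.5 translates directly into code. The following is a minimal sketch (the function names `make_random_mapping` and `simulate` are our own, not from the text): given a transition matrix P as a list of rows, f(j, z) returns the state k whose cumulative-probability interval (F_{j,k−1}, F_{j,k}] contains z, and the chain is run by feeding f a stream of i.i.d. uniform variables.

```python
import random

def make_random_mapping(P):
    """Random mapping representation of a transition matrix P
    (a list of rows, each summing to 1), following the proof of
    Proposition 1.5: Lambda = [0, 1], and f(j, z) is the state k
    with F_{j,k-1} < z <= F_{j,k}."""
    def f(j, z):
        cum = 0.0
        for k, p in enumerate(P[j]):
            cum += p
            if z <= cum:
                return k
        return len(P[j]) - 1  # guard against floating-point round-off
    return f

def simulate(P, x0, steps, rng=random):
    """Run the chain X_n = f(X_{n-1}, Z_n) with i.i.d. uniform Z_n."""
    f = make_random_mapping(P)
    x, path = x0, [x0]
    for _ in range(steps):
        x = f(x, rng.random())
        path.append(x)
    return path
```

For the simple random walk on the n-cycle, row j of P has mass 1/2 on (j − 1) mod n and (j + 1) mod n; applying `make_random_mapping` to that matrix recovers (up to relabeling of Λ) the coin-toss description given above.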
Random mapping representations are crucial for simulating large chains. They can also be the most convenient way to describe a chain. We will often give rules for how a chain proceeds from state to state, using some extra randomness to determine where to go next; such discussions are implicit random mapping representations. Finally, random mapping representations provide a way to coordinate two (or more) chain trajectories, as we can simply use the same sequence of auxiliary random variables to determine updates. This technique will be exploited in Chapter 5, on coupling Markov chain trajectories, and elsewhere.
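As a small illustration of this coordination idea (a sketch with invented names and starting states, not an example from the text), here are two walkers on the n-cycle driven by the same sequence of ±1 coin flips:

```python
import random

def coupled_walks(n, steps, seed=0):
    """Two copies of the random walk on the n-cycle, updated with
    the SAME auxiliary variables Z_i in {+1, -1}.  Returns the
    clockwise distance (y - x) mod n after each step."""
    rng = random.Random(seed)
    x, y = 0, n // 2              # two different starting states
    gaps = []
    for _ in range(steps):
        z = rng.choice([1, -1])   # one flip shared by both chains
        x = (x + z) % n
        y = (y + z) % n
        gaps.append((y - x) % n)
    return gaps
```

Note that with this particular shared-randomness rule the distance between the two walkers never changes, so the copies never meet; the couplings of Chapter 5 modify the update rule (for instance, letting the chains move independently until they collide) precisely so that the trajectories can coalesce.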
