Preface

Markov first studied the stochastic processes that came to be named after him in 1906. Approximately a century later, there is an active and diverse interdisciplinary community of researchers using Markov chains in computer science, physics, statistics, bioinformatics, engineering, and many other areas.

The classical theory of Markov chains studied fixed chains, and the goal was to estimate the rate of convergence to stationarity of the distribution at time t, as t → ∞. In the past two decades, as interest in chains with large state spaces has increased, a different asymptotic analysis has emerged. Some target distance to the stationary distribution is prescribed; the number of steps required to reach this target is called the mixing time of the chain. Now, the goal is to understand how the mixing time grows as the size of the state space increases.
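The notion just described can be made concrete with a small numerical sketch. The chain below (a lazy random walk on an n-cycle) and the target distance ε = 1/4 are illustrative choices, not anything prescribed by the text: we iterate the distribution, measure its total-variation distance to the uniform stationary distribution, and record the first time the target is met.

```python
import numpy as np

def lazy_cycle(n):
    """Transition matrix of the lazy random walk on an n-cycle:
    hold with probability 1/2, else step to a uniform neighbor."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5
        P[i, (i + 1) % n] += 0.25
        P[i, (i - 1) % n] += 0.25
    return P

def mixing_time(P, eps=0.25, start=0, max_steps=100_000):
    """Smallest t with total-variation distance to stationarity <= eps,
    starting from a point mass at `start`. For this symmetric walk the
    stationary distribution is uniform."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    mu = np.zeros(n)
    mu[start] = 1.0
    for t in range(1, max_steps + 1):
        mu = mu @ P                         # one step of the chain
        if 0.5 * np.abs(mu - pi).sum() <= eps:  # TV distance
            return t
    return None

print(mixing_time(lazy_cycle(8)))
print(mixing_time(lazy_cycle(32)))
```

Running this for increasing n shows the growth the preface alludes to: the mixing time of the lazy cycle walk increases with the size of the state space (in fact on the order of n²).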

The modern theory of Markov chain mixing is the result of the convergence, in the 1980's and 1990's, of several threads. (We mention only a few names here; see the chapter Notes for references.)

For statistical physicists Markov chains became useful in Monte Carlo simulation, especially for models on finite grids. The mixing time can determine the running time for simulation. However, Markov chains are used not only for simulation and sampling purposes, but also as models of dynamical processes. Deep connections were found between rapid mixing and spatial properties of spin systems, e.g., by Dobrushin, Shlosman, Stroock, Zegarlinski, Martinelli, and Olivieri.

In theoretical computer science, Markov chains play a key role in sampling and approximate counting algorithms. Often the goal was to prove that the mixing time is polynomial in the logarithm of the state space size. (In this book, we are generally interested in more precise asymptotics.)

At the same time, mathematicians including Aldous and Diaconis were intensively studying card shuffling and other random walks on groups. Both spectral methods and probabilistic techniques, such as coupling, played important roles. Alon and Milman, Jerrum and Sinclair, and Lawler and Sokal elucidated the connection between eigenvalues and expansion properties. Ingenious constructions of "expander" graphs (on which random walks mix especially fast) were found using probability, representation theory, and number theory.

In the 1990's there was substantial interaction between these communities, as computer scientists studied spin systems and as ideas from physics were used for sampling combinatorial structures. Using the geometry of the underlying graph to find (or exclude) bottlenecks played a key role in many results.

There are many methods for determining the asymptotics of convergence to stationarity as a function of the state space size and geometry. We hope to present these exciting developments in an accessible way.
