1.1. A review of probability theory 23

Exercise 1.1.20 (Creation of new, independent random variables). Let (Xα)α∈A be a family of random variables (not necessarily independent or finite), and let (μβ)β∈B be a finite collection of probability measures μβ on measurable spaces Rβ. Then, after extending the sample space if necessary, one can find a family (Yβ)β∈B of independent random variables, such that each Yβ has distribution μβ, and the two families (Xα)α∈A and (Yβ)β∈B are independent of each other.
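The mechanism behind this exercise is the product construction: replace the sample space Ω by Ω × Ω′, give it the product measure, keep the old variables on the first coordinate, and define the new variables on the second. The following finite toy example (all distributions here are invented for illustration) carries this out explicitly and verifies independence by computing the joint probabilities exactly.

```python
from fractions import Fraction
from itertools import product

# Original sample space Omega: a fair three-sided spinner, with the
# existing random variable X being the outcome itself.
omega = {0: Fraction(1, 3), 1: Fraction(1, 3), 2: Fraction(1, 3)}

def X(point):
    return point[0]  # X depends only on the original coordinate

# Target distribution mu for the new variable Y: a biased coin.
mu = {"H": Fraction(3, 4), "T": Fraction(1, 4)}

# Extended sample space Omega x Omega' with the product measure.
extended = {(w, wp): p * q for w, p in omega.items() for wp, q in mu.items()}

def Y(point):
    return point[1]  # Y depends only on the new coordinate

def prob(event):
    """Probability of an event (a set of points) in the extended space."""
    return sum(p for pt, p in extended.items() if pt in event)

# Y has distribution mu, and the joint law of (X, Y) factorizes,
# i.e. X and Y are independent by construction.
for x, y in product(omega, mu):
    assert prob({(x, y)}) == omega[x] * mu[y]
```

Iterating this construction (or taking an infinite product, which is where the topological caveats of the next remark enter) handles any finite collection of new variables.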

Remark 1.1.15. It is possible to extend this exercise to the case when B is infinite using the Kolmogorov extension theorem, which can be found in any graduate probability text (see e.g. [Ka2002]). There is, however, the caveat that some (mild) topological hypotheses now need to be imposed on the range Rβ of the variables Yβ. For instance, it is enough to assume that each Rβ is a locally compact σ-compact metric space equipped with the Borel σ-algebra. These technicalities will, however, not be the focus of this course, and we shall gloss over them in the rest of the text.

We isolate the important case when μβ = μ is independent of β. We say that a family (Xα)α∈A of random variables is independently and identically distributed, or iid for short, if they are jointly independent and all the Xα have the same distribution.

Corollary 1.1.16. Let (Xα)α∈A be a family of random variables (not necessarily independent or finite), let μ be a probability measure on a measurable space R, and let B be an arbitrary set. Then, after extending the sample space if necessary, one can find an iid family (Yβ)β∈B with distribution μ which is independent of (Xα)α∈A.
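At the level of simulation, the corollary has a familiar analogue (a hypothetical illustration, not part of the text): drawing from a fresh, separately seeded random number generator plays the role of extending the sample space, and the new iid draws neither disturb nor depend on any randomness already in play.

```python
import random

# Randomness "already in play": some existing Gaussian samples.
existing_rng = random.Random(1)
X = [existing_rng.gauss(0.0, 1.0) for _ in range(5)]

# The "extended sample space": a fresh, independently seeded generator
# producing an iid family with common distribution mu = Bernoulli(1/2).
fresh_rng = random.Random(2)
Y = [fresh_rng.randrange(2) for _ in range(10)]

# The old family is untouched: re-simulating from the original seed
# reproduces X exactly, regardless of the new draws.
check_rng = random.Random(1)
assert X == [check_rng.gauss(0.0, 1.0) for _ in range(5)]
```

One could equally well draw Gaussians or any other distribution from `fresh_rng`; the point is only that the new source of randomness is decoupled from the old one.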

Thus, for instance, one can create arbitrarily large iid families of Bernoulli random variables, Gaussian random variables, etc., regardless of what other random variables are already in play. We thus see that the freedom to extend the underlying sample space allows us access to an unlimited source of randomness. This is in contrast to a situation studied in complexity theory and computer science, in which one does not assume that the sample space can be extended at will, and the amount of randomness one can use is therefore limited.

Remark 1.1.17. Given two probability measures μX, μY on two measurable spaces RX, RY, a joining or coupling of these measures is a random variable (X, Y) taking values in the product space RX × RY, whose individual components X, Y have distribution μX, μY, respectively. Exercise 1.1.20 shows that one can always couple two distributions together in an independent manner; but one can certainly create non-independent couplings as well. The study of couplings (or joinings) is particularly important in ergodic theory, but this will not be the focus of this text.
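The contrast between independent and non-independent couplings can be made concrete by Monte Carlo (a hedged sketch with marginals chosen purely for illustration): below, μX is uniform on {0, 1, 2} and μY is Bernoulli(1/2). The dependent coupling drives both components from a single uniform draw, so the marginals are unchanged but the components become strongly correlated.

```python
import random

rng = random.Random(0)

def independent_coupling():
    """Sample X and Y from separate sources: the independent coupling."""
    return (rng.randrange(3), rng.randrange(2))

def dependent_coupling():
    """Drive X and Y by one uniform draw u: same marginals, correlated."""
    u = rng.random()
    x = int(u * 3)              # Uniform on {0, 1, 2}
    y = 1 if u >= 0.5 else 0    # Bernoulli(1/2), from the same u
    return (x, y)

n = 100_000
dep = [dependent_coupling() for _ in range(n)]

# Both components still have the correct marginal means...
assert abs(sum(x for x, _ in dep) / n - 1.0) < 0.05   # E[X] = 1
assert abs(sum(y for _, y in dep) / n - 0.5) < 0.05   # E[Y] = 1/2

# ...but the coupling is far from independent: Y = 1 forces X >= 1.
assert all(x >= 1 for x, y in dep if y == 1)
```

The dependent coupling here is a comonotone one (both coordinates are increasing functions of the same uniform variable); many other joint laws with the same marginals exist.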