We write X ≡ Y for μX = μY; we also abuse notation slightly by writing
X ≡ μX.
We have seen that every random variable generates a probability distri-
bution μX . The converse is also true:
Lemma 1.1.7 (Creating a random variable with a specified distribution).
Let μ be a probability measure on a measurable space R = (R, R). Then
(after extending the sample space Ω if necessary) there exists an R-valued
random variable X with distribution μ.
Proof. Extend Ω to Ω × R by using the obvious projection map (ω, r) ↦ ω
from Ω × R back to Ω, and extending the probability measure P on Ω to
the product measure P × μ on Ω × R. The random variable X(ω, r) := r
then has distribution μ.
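In computational practice the lemma corresponds to the familiar act of sampling from a prescribed distribution. The following Python sketch (an illustration added here, not from the text; the helper name sample_from is hypothetical) realises a discrete probability measure by drawing values x with the specified weights px = μ({x}).

```python
import numpy as np

def sample_from(values, probs, size=1, seed=None):
    """Draw samples of a random variable X with P(X = x) = p_x.

    `values` lists the atoms x and `probs` the weights p_x = mu({x});
    the weights must be non-negative and sum to 1.
    """
    probs = np.asarray(probs, dtype=float)
    assert np.all(probs >= 0) and np.isclose(probs.sum(), 1.0)
    rng = np.random.default_rng(seed)
    return rng.choice(values, size=size, p=probs)

# Example: the signed Bernoulli distribution, p_{+1} = p_{-1} = 1/2.
print(sample_from([-1, +1], [0.5, 0.5], size=10, seed=0))
```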
If X is a discrete random variable, μX is the discrete probability measure
(1.3) μX(S) = ∑_{x∈S} px
where px := P(X = x) are non-negative real numbers that add up to 1. To
put it another way, the distribution of a discrete random variable can be
expressed as the sum of Dirac masses (defined below):
(1.4) μX = ∑_{x∈R} px δx.
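For instance, an unbiased coin flip X, taking each of the values 0 and 1 with probability 1/2, has distribution μX = (1/2)δ0 + (1/2)δ1.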
We list some important examples of discrete distributions (a brief sampling sketch follows the list):
(i) Dirac distributions δ_{x0}, in which px = 1 for x = x0 and px = 0
otherwise;
(ii) discrete uniform distributions, in which R is finite and px = 1/|R|
for all x ∈ R;
(iii) (unsigned) Bernoulli distributions, in which R = {0, 1}, p1 = p,
and p0 = 1 − p for some parameter 0 ≤ p ≤ 1;
(iv) the signed Bernoulli distribution, in which R = {−1, +1} and p+1 =
p−1 = 1/2;
(v) lazy signed Bernoulli distributions, in which R = {−1, 0, +1}, p+1 =
p−1 = μ/2, and p0 = 1 − μ for some parameter 0 ≤ μ ≤ 1;
(vi) geometric distributions, in which R = {0, 1, 2, ...} and pk = (1 − p)^k p
for all natural numbers k and some parameter 0 ≤ p ≤ 1; and
(vii) Poisson distributions, in which R = {0, 1, 2, ...} and pk = λ^k e^{−λ}/k!
for all natural numbers k and some parameter λ.
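As a concrete illustration (added here, not from the text), the following Python sketch draws samples from several of the distributions above with numpy and compares empirical frequencies against the stated weights px. Note that numpy's geometric sampler counts the trial of the first success, so we subtract 1 to match the convention pk = (1 − p)^k p on R = {0, 1, 2, ...}; the parameter values mu, p, and lam below are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of samples per distribution

# Signed Bernoulli: R = {-1, +1}, p_{+1} = p_{-1} = 1/2.
signed = rng.choice([-1, +1], size=n)

# Lazy signed Bernoulli with parameter mu = 0.3:
# R = {-1, 0, +1}, p_{+1} = p_{-1} = mu/2, p_0 = 1 - mu.
mu = 0.3
lazy = rng.choice([-1, 0, +1], size=n, p=[mu / 2, 1 - mu, mu / 2])

# Geometric with parameter p = 0.4: p_k = (1 - p)^k * p on {0, 1, 2, ...}.
# numpy returns the index of the first success, so shift by 1.
p = 0.4
geom = rng.geometric(p, size=n) - 1

# Poisson with parameter lam = 2.5: p_k = lam^k e^{-lam} / k!.
lam = 2.5
pois = rng.poisson(lam, size=n)

# Empirical frequency of a few atoms versus the theoretical weight p_x.
print("P(geom = 0) ~", np.mean(geom == 0), "vs", p)
print("P(pois = 0) ~", np.mean(pois == 0), "vs", np.exp(-lam))
print("P(lazy = 0) ~", np.mean(lazy == 0), "vs", 1 - mu)
```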