(v) Given two random variables X1 and X2 taking values in R1,R2,
respectively, one can form the joint random variable (X1,X2) with
range R1×R2 with the product σ-algebra, by setting (X1,X2)(ω) :=
(X1(ω),X2(ω)) for every ω ∈ Ω. One easily verifies that this
is indeed a random variable, and that the operation of taking a
joint random variable is a probabilistic operation. This variable
can also be defined without reference to the sample space as the
unique random variable for which one has π1(X1,X2) = X1 and
π2(X1,X2) = X2, where π1 : (x1,x2) ↦ x1 and π2 : (x1,x2) ↦ x2
are the usual projection maps from R1 ×R2 to R1,R2, respectively.
One can similarly define the joint random variable (Xα)α∈A for any
family of random variables in various ranges Rα. Note here that
the set A of labels can be infinite or even uncountable, though of
course one needs to endow infinite product spaces ∏α∈A Rα with
the product σ-algebra to retain measurability.
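For instance, one way to verify measurability of the joint variable (a routine check, with S1 and S2 denoting arbitrary measurable subsets of R1 and R2 introduced only for this sketch) is to compute preimages of product sets:

(X1,X2)⁻¹(S1 × S2) = X1⁻¹(S1) ∩ X2⁻¹(S2),

which is measurable since X1 and X2 are; as such product sets generate the product σ-algebra on R1 × R2, the measurability of (X1,X2) follows.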
(vi) Combining the previous two constructions, given any measurable
binary operation f : R1 × R2 → R3 and random variables X1,X2
taking values in R1,R2, respectively, one can form the R3-valued
random variable f(X1,X2) := f((X1,X2)), and this is a probabilis-
tic operation. Thus, for instance, one can add or multiply together
scalar random variables, and similarly for the matrix-valued ran-
dom variables that we will consider shortly. Similarly for ternary
and higher order operations. A technical issue arises if one wants to
perform an operation (such as division of two scalar random variables)
which is not defined everywhere (e.g., division when the denominator
is zero). In such cases, one has to adjoin an additional “undefined”
symbol ⊥ to the output range R3. In practice, this will not
be a problem as long as all random variables concerned are defined
(i.e., avoid ⊥) almost surely.
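To give a concrete sketch of this convention (with the formula below serving as an illustration rather than a definition from the text), one can define the quotient of two scalar random variables X1, X2 by

X1/X2 := f(X1,X2), where f(x1,x2) := x1/x2 when x2 ≠ 0 and f(x1,x2) := ⊥ when x2 = 0.

If X2 is non-zero almost surely (for instance, if X2 has a continuous distribution), then X1/X2 avoids ⊥ almost surely and may be manipulated like any other scalar random variable.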
(vii) Vector-valued random variables, which take values in a finite-dimensional
vector space such as Rn or Cn with the Borel σ-algebra. One
can view a vector-valued random variable X = (X1,...,Xn) as the
joint random variable of its scalar component random variables
X1,...,Xn. (Here we are using the basic fact from measure theory
that the Borel σ-algebra on Rn is the product σ-algebra of the
individual Borel σ-algebras on R.)
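As a quick consequence of this product structure (a routine observation, with S1,...,Sn denoting arbitrary Borel subsets of R introduced only for this computation), X = (X1,...,Xn) is measurable precisely when each of its components is, since

X⁻¹(S1 × ··· × Sn) = X1⁻¹(S1) ∩ ··· ∩ Xn⁻¹(Sn)

and such boxes generate the Borel σ-algebra on Rn; conversely, each Xj = πj(X) is a measurable function of X.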
(viii) Matrix-valued random variables or random matrices, which take
values in a space Mn×p(R) or Mn×p(C) of n × p real or complex-
valued matrices, again with the Borel σ-algebra, where n, p ≥ 1 are
integers (usually we will focus on the square case n = p). Note
here that the shape n × p of the matrix is deterministic; we will