The Fisher information for a statistical experiment of size n is the variance of the total Fisher score function,
\[
I_n(\theta) \,=\, \operatorname{Var}_\theta\big[ L_n'(\theta) \big]
\,=\, \mathbb{E}_\theta\Big[ \big( L_n'(\theta) \big)^2 \Big]
\,=\, \mathbb{E}_\theta\bigg[ \Big( \frac{\partial \ln p(X_1,\dots,X_n,\,\theta)}{\partial\theta} \Big)^{\!2} \bigg]
\]
\[
\,=\, \int_{\mathbb{R}^n} \frac{\big( \partial p(x_1,\dots,x_n,\theta)/\partial\theta \big)^2}{p(x_1,\dots,x_n,\theta)}\; dx_1 \dots dx_n .
\]
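For instance, if X_1, …, X_n are independent N(θ, σ²) observations with σ² known, then ln p(X_1,…,X_n, θ) = −∑_{i=1}^n (X_i − θ)²/(2σ²) + const, so the total Fisher score is $L_n'(\theta) = \sum_{i=1}^{n} (X_i - \theta)/\sigma^2$, and
\[
I_n(\theta) = \operatorname{Var}_\theta\Big[ \sum_{i=1}^{n} \frac{X_i - \theta}{\sigma^2} \Big]
= \frac{n\,\sigma^2}{\sigma^4} = \frac{n}{\sigma^2},
\]
which in this model does not depend on θ.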
Lemma 1.7. For independent observations, the Fisher information is additive. In particular, for any θ ∈ Θ, the equality $I_n(\theta) = n\, I(\theta)$ holds.
Proof. As the variance of the sum of n independent random variables,
\[
I_n(\theta) = \operatorname{Var}_\theta\big[ L_n'(\theta) \big]
= \operatorname{Var}_\theta\big[ l'(X_1, \theta) + \dots + l'(X_n, \theta) \big]
= n \operatorname{Var}_\theta\big[ l'(X_1, \theta) \big] = n\, I(\theta).
\]
In view of this lemma, we use the following definition of the Fisher information for a random sample of size n:
\[
I_n(\theta) = n\, \mathbb{E}_\theta\bigg[ \Big( \frac{\partial \ln p(X, \theta)}{\partial\theta} \Big)^{\!2} \bigg].
\]
Another way of computing the Fisher information is presented in Exercise
1.1.
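The expectation in this definition can also be checked numerically. The following is a minimal Python sketch, assuming the same N(θ, σ²) model with known σ² as in the example above; the helper names are illustrative, not from the text. It estimates E_θ[(∂ ln p(X, θ)/∂θ)²] by Monte Carlo, multiplies by n, and compares the result with the exact value n/σ².

```python
import numpy as np

def score_normal(x, theta, sigma):
    # Fisher score  d/d(theta) ln p(x, theta)  for the N(theta, sigma^2) density
    return (x - theta) / sigma**2

def fisher_information_mc(theta, n, sigma, reps=200_000, seed=0):
    # Monte Carlo estimate of I_n(theta) = n * E_theta[(score of one observation)^2]
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=theta, scale=sigma, size=reps)              # draws from p(x, theta)
    single_obs_info = np.mean(score_normal(x, theta, sigma) ** 2)  # estimate of I(theta)
    return n * single_obs_info

theta, n, sigma = 2.0, 50, 1.5
print(fisher_information_mc(theta, n, sigma))  # close to the exact value below
print(n / sigma**2)                            # exact I_n(theta) = n / sigma^2
```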
1.3. The Cramér-Rao Lower Bound
A statistical experiment is called regular if its Fisher information is continuous, strictly positive, and bounded for all θ ∈ Θ. Next we present an inequality for the variance of any estimator of θ in a regular experiment. This inequality is termed the Cramér-Rao inequality, and the lower bound is known as the Cramér-Rao lower bound.
Theorem 1.8. Consider an estimator $\hat{\theta}_n = \hat{\theta}_n(X_1, \dots, X_n)$ of the parameter θ in a regular experiment. Suppose its bias $b_n(\theta) = \mathbb{E}_\theta\big[\hat{\theta}_n\big] - \theta$ is continuously differentiable. Let $b_n'(\theta)$ denote the derivative of the bias. Then the variance of $\hat{\theta}_n$ satisfies the inequality
\[
(1.1) \qquad \operatorname{Var}_\theta\big[ \hat{\theta}_n \big] \;\ge\; \frac{\big( 1 + b_n'(\theta) \big)^2}{I_n(\theta)}\,, \qquad \theta \in \Theta.
\]
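In particular, if $\hat{\theta}_n$ is unbiased, then $b_n(\theta) \equiv 0$ and $b_n'(\theta) = 0$, so inequality (1.1) reduces to $\operatorname{Var}_\theta\big[\hat{\theta}_n\big] \ge 1/I_n(\theta) = 1/(n\, I(\theta))$.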
Proof. By the definition of the bias, we have that
\[
\theta + b_n(\theta) = \mathbb{E}_\theta\big[ \hat{\theta}_n \big]
= \int_{\mathbb{R}^n} \hat{\theta}_n(x_1, \dots, x_n)\, p(x_1, \dots, x_n, \theta)\; dx_1 \dots dx_n .
\]