The Fisher information for a statistical experiment of size n is the variance of the total Fisher score function,

I_n(θ) = Var_θ[ L_n(θ) ] = E_θ[ ( ∂ ln p(X_1, ..., X_n, θ) / ∂θ )² ]

       = ∫ ... ∫ ( ∂ ln p(x_1, ..., x_n, θ) / ∂θ )² p(x_1, ..., x_n, θ) dx_1 ... dx_n.
Lemma 1.7. For independent observations, the Fisher information is additive. In particular, for any θ ∈ Θ, I_n(θ) = n I(θ).
Proof. As the variance of a sum of n independent and identically distributed random variables,

I_n(θ) = Var_θ[ L_n(θ) ] = Var_θ[ l(X_1, θ) + ... + l(X_n, θ) ]

       = n Var_θ[ l(X_1, θ) ] = n I(θ). □
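As a quick numerical sanity check of the lemma (an illustrative sketch, not from the text, assuming the N(θ, 1) model, for which the score is l(X, θ) = X − θ and I(θ) = 1):

```python
import numpy as np

# Additivity check: for X ~ N(theta, 1) the score is l(X, theta) = X - theta,
# so I(theta) = Var[l] = 1 and the lemma gives I_n(theta) = n * I(theta) = n.
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000

# Simulate `reps` samples of size n and form the total score
# L_n(theta) = sum_i (X_i - theta) for each sample.
samples = rng.normal(theta, 1.0, size=(reps, n))
total_score = (samples - theta).sum(axis=1)

# Monte Carlo estimate of I_n(theta) = Var[L_n(theta)]; close to n = 5.
I_n_hat = total_score.var()
print(I_n_hat)
```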
In view of this lemma, we use the following definition of the Fisher information for a random sample of size n:

I_n(θ) = n E_θ[ ( ∂ ln p(X, θ) / ∂θ )² ].
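For instance (a worked example not in the original text, assuming the Bernoulli(θ) model with p(x, θ) = θ^x (1 − θ)^{1−x}, x ∈ {0, 1}):

```latex
\frac{\partial \ln p(X,\theta)}{\partial\theta}
  = \frac{X}{\theta} - \frac{1-X}{1-\theta}
  = \frac{X-\theta}{\theta(1-\theta)},
\qquad
I_n(\theta)
  = n\,\mathbb{E}_\theta\!\left[\left(\frac{X-\theta}{\theta(1-\theta)}\right)^{\!2}\right]
  = \frac{n}{\theta(1-\theta)},
```

since E_θ[(X − θ)²] = Var_θ[X] = θ(1 − θ).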
Another way of computing the Fisher information is presented in the exercises.
1.3. The Cramér-Rao Lower Bound
A statistical experiment is called regular if its Fisher information is con-
tinuous, strictly positive, and bounded for all θ ∈ Θ . Next we present an
inequality for the variance of any estimator of θ in a regular experiment.
This inequality is termed the Cramér-Rao inequality, and the lower bound is known as the Cramér-Rao lower bound.
Theorem 1.8. Consider an estimator θ̂_n = θ̂_n(X_1, ..., X_n) of the parameter θ in a regular experiment. Suppose its bias b_n(θ) = E_θ[θ̂_n] − θ is continuously differentiable, and let b_n'(θ) denote the derivative of the bias. Then the variance of θ̂_n satisfies the inequality

Var_θ[ θ̂_n ] ≥ ( 1 + b_n'(θ) )² / ( n I(θ) ),   θ ∈ Θ.
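Before proving the theorem, a hedged numerical illustration (assuming the N(θ, 1) model, not an example from the text): the sample mean is unbiased, so b_n(θ) = 0 and the bound equals (1 + 0)² / (n · 1) = 1/n, which the sample mean attains exactly.

```python
import numpy as np

# Cramer-Rao illustration for X_i ~ N(theta, 1), estimator = sample mean:
# bias b_n = 0, I(theta) = 1, so the bound is 1/n, and Var[mean] = 1/n.
rng = np.random.default_rng(1)
theta, n, reps = 0.5, 10, 200_000

# Monte Carlo variance of the sample mean over many simulated samples.
means = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
var_hat = means.var()

cr_bound = 1.0 / n  # (1 + b_n'(theta))^2 / (n * I(theta)) with b_n' = 0
print(var_hat, cr_bound)
```

Here the bound is attained because the sample mean is the efficient estimator in this model; for other estimators the inequality is generally strict.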
Proof. By the definition of the bias, we have that

θ + b_n(θ) = E_θ[ θ̂_n ] = ∫ ... ∫ θ̂_n(x_1, ..., x_n) p(x_1, ..., x_n, θ) dx_1 ... dx_n.