Definition 3.2. For any f ∈ D′ and any a(x) ∈ C∞, af is the distribution defined by the formula:
(3.5) af(ϕ) = f(aϕ), ∀ϕ ∈ D.
Note that the right hand side of (3.5) defines a linear continuous functional on D. Indeed, aϕ ∈ D, supp(aϕ) ⊂ supp ϕ, and the Leibniz formula gives that ϕₘ → ϕ in D implies that aϕₘ → aϕ in D.
Example 3.2. x₁δ = 0. Indeed, (x₁δ)(ϕ) = δ(x₁ϕ) = 0 · ϕ(0) = 0 for all ϕ ∈ D.
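As a numerical illustration, here is a minimal Python sketch in which a distribution is modeled as a callable acting on test functions; the helper names delta and multiply are ad hoc choices, not from any library. It reproduces (3.5) and the computation of Example 3.2.

```python
# Minimal sketch: distributions modeled as Python callables acting on
# test functions, illustrating (3.5) and Example 3.2.
import math

def delta(phi):
    """The delta functional: delta(phi) = phi(0)."""
    return phi(0.0)

def multiply(a, f):
    """Product a*f of a C-infinity function a and a distribution f, per (3.5):
    (a f)(phi) = f(a * phi)."""
    return lambda phi: f(lambda x: a(x) * phi(x))

# A standard compactly supported test function (a bump supported on (-1, 1)).
def phi(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

a = lambda x: x                   # a(x) = x1
print(multiply(a, delta)(phi))    # (x1 * delta)(phi) = 0 * phi(0) = 0.0
print(delta(phi))                 # phi(0) = e^{-1} ≈ 0.3679
```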
3.3. Change of variables for distributions.
Let x = s(y) be a one-to-one C∞ map of ℝⁿ onto ℝⁿ:
(3.6) xₖ = sₖ(y₁,...,yₙ), 1 ≤ k ≤ n.
We assume that the inverse map y = s⁻¹(x) is also C∞. Denote by J(x) = det(∂sᵢ⁻¹(x)/∂xⱼ), 1 ≤ i, j ≤ n, the Jacobian of the inverse map y = s⁻¹(x).
For a regular functional, after changing the variable y = s⁻¹(x) we get:
(3.7) ∫ℝⁿ f(s(y))ϕ(y) dy = ∫ℝⁿ f(x)ϕ(s⁻¹(x))|J(x)| dx.
Let (f ∘ s)(y) = f(s(y)). Then we can rewrite (3.7) in the following form:
(3.8) (f ∘ s)(ϕ) = f(ϕ(s⁻¹(x))|J(x)|).
Note that if ϕ ∈ D, then ψ(x) = ϕ(s⁻¹(x))|J(x)| is also in D. Moreover, if ϕₙ → ϕ in D, then ψₙ → ψ in D. Therefore the right hand side of (3.8) is a linear continuous functional on D.
Hence for any f ∈ D′ we can define the change of variable x = s(y) in f by formula (3.8).
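The identity (3.7) behind this definition can be checked numerically for a regular functional. The sketch below is only an illustration: the particular choices s = sinh, f(x) = 1/(1 + x²), and a Gaussian ϕ (rapidly decaying rather than compactly supported, which is enough for quadrature) are ours.

```python
# Numerical check of (3.7) for a regular functional, n = 1.
import numpy as np
from scipy.integrate import quad

s     = np.sinh                               # x = s(y), a C-infinity bijection of R
s_inv = np.arcsinh                            # y = s^{-1}(x)
J     = lambda x: 1.0 / np.sqrt(1.0 + x**2)   # J(x) = d s^{-1}(x) / dx

f   = lambda x: 1.0 / (1.0 + x**2)            # f defines a regular functional
phi = lambda y: np.exp(-y**2)                 # a smooth, rapidly decaying phi

lhs, _ = quad(lambda y: f(s(y)) * phi(y), -np.inf, np.inf)
rhs, _ = quad(lambda x: f(x) * phi(s_inv(x)) * abs(J(x)), -np.inf, np.inf)
print(lhs, rhs)   # the two sides of (3.7) agree to quadrature accuracy
```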
Example 3.3. If x₁ = s(y₁) = ky₁ + b is a linear map in ℝ¹, we get
(f ∘ s)(ϕ) = f(ϕ((x₁ − b)/k) · 1/|k|).
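For instance, applying this formula to f = δ gives (δ ∘ s)(ϕ) = ϕ(−b/k)/|k|. The following Python sketch of this special case is an illustration only; the helper pullback_linear and the values of k and b are chosen ad hoc.

```python
# Example 3.3 applied to f = delta (our choice of f):
# for x1 = k*y1 + b, (f o s)(phi) = f( phi((x1 - b)/k) * 1/|k| ),
# so (delta o s)(phi) = phi(-b/k) / |k|.
import math

delta = lambda phi: phi(0.0)

def pullback_linear(f, k, b):
    """Change of variables x1 = k*y1 + b in the distribution f, as in Example 3.3."""
    return lambda phi: f(lambda x: phi((x - b) / k) / abs(k))

k, b = 3.0, 1.5
phi = lambda y: math.exp(-y**2)
print(pullback_linear(delta, k, b)(phi))    # phi(-b/k)/|k|
print(math.exp(-(b / k)**2) / abs(k))       # same value, ≈ 0.2596
```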
4. Convergence of distributions
Definition 4.1. We say that a sequence of distributions fₙ ∈ D′ converges to a distribution f if
(4.1) fₙ(ϕ) → f(ϕ) for any ϕ ∈ D,
i.e., fₙ → f in D′ if (4.1) holds.
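A standard way to see this definition in action is a δ-shaped sequence of regular functionals; the sketch below checks the convergence numerically, with the particular fₙ and ϕ being our choices rather than taken from the text.

```python
# Numerical sketch of Definition 4.1: the regular functionals given by
# f_n(x) = (n/sqrt(pi)) * exp(-n^2 x^2) converge to delta in D',
# i.e. f_n(phi) -> phi(0) for every test function phi.
import numpy as np
from scipy.integrate import quad

phi = lambda x: np.cos(x) * np.exp(-x**2)       # a smooth, rapidly decaying phi

def f_n(n, phi):
    g = lambda x: n / np.sqrt(np.pi) * np.exp(-(n * x)**2) * phi(x)
    val, _ = quad(g, -10.0 / n, 10.0 / n)       # the Gaussian is negligible outside
    return val

for n in (1, 4, 16, 64):
    print(n, f_n(n, phi))                       # tends to phi(0) = 1
```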