2.3. Lagrange's approach

Let us suppose that the function $y = \hat{y}(x)$ solves our problem. We now introduce $h(x)$, a small deviation or variation from this idealized solution,
\[
  y(x) \,=\, \hat{y}(x) + h(x)   \tag{2.19}
\]
(see Figure 2.2), that satisfies
\[
  h(a) = 0 \ \text{ and } \ h(b) = 0 .   \tag{2.20}
\]
At this point, we need to discuss a subtle point that escaped Lagrange but that turns out to be rather important. What exactly do we mean when we say that a variation is small? The usual way to measure the nearness of two functions is to compute the norm of the difference of the two functions. There are many possible norms, and we will see that our conclusions about extrema (maxima and minima) are rather sensitive to which norm we use. We will use two different norms throughout this course. They are the weak norm
\[
  \|h\|_w \,=\, \max_{[a,b]} |h(x)|   \tag{2.21}
\]
and the strong norm
\[
  \|h\|_s \,=\, \max_{[a,b]} |h(x)| \,+\, \sup_{[a,b]} |h'(x)| .   \tag{2.22}
\]

[Figure 2.3. Strong variation: the curve $y(x) = \hat{y}(x) + h(x)$ lying within an $\varepsilon$-band about $\hat{y}(x)$ on the interval $[a, b]$.]
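These two norms can disagree sharply about whether a variation is small. As a quick illustration (the particular functions $h_n$ below are chosen only for the sake of example and are not taken from the text), consider, on $[a,b]$, the sequence
\[
  h_n(x) \,=\, \frac{1}{n}\,\sin\!\left(\frac{n^2 \pi (x-a)}{b-a}\right), \qquad n = 1, 2, 3, \ldots,
\]
each member of which satisfies the boundary conditions (2.20). Since $|h_n(x)| \le 1/n$, with equality attained,
\[
  \|h_n\|_w \,=\, \max_{[a,b]} |h_n(x)| \,=\, \frac{1}{n} \;\longrightarrow\; 0 ,
\]
so that $h_n$ is eventually as small as we please in the weak norm. Its derivative, however, is
\[
  h_n'(x) \,=\, \frac{n\pi}{b-a}\,\cos\!\left(\frac{n^2 \pi (x-a)}{b-a}\right),
\]
so that
\[
  \|h_n\|_s \,=\, \frac{1}{n} + \frac{n\pi}{b-a} \;\longrightarrow\; \infty .
\]
A rapidly oscillating wiggle of this sort hugs $\hat{y}(x)$ in value while straying wildly in slope: it is close to zero in the weak norm (2.21) but far from zero in the strong norm (2.22).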