CHAPTER 1

The result

We consider the problem of characterization of best polynomial approximation on polytopes in $\mathbf{R}^d$. To have a basis for discussion, first we briefly review the one-dimensional case.

Let $f$ be a continuous function on $[-1,1]$. With $\varphi(x)=\sqrt{1-x^2}$ and $r=1,2,\ldots$ let
\[
(1.1)\qquad \omega_\varphi^r(f,\delta)=\sup_{0<h\le \delta,\ x\in[-1,1]}\bigl\|\Delta_{h\varphi(x)}^r f(x)\bigr\|_{[-1,1]}
\]
be its so-called $\varphi$-modulus of smoothness of order $r$, where

\[
(1.2)\qquad \Delta_h^r f(x)=\sum_{k=0}^{r}(-1)^k\binom{r}{k}\,f\Bigl(x+\Bigl(\frac{r}{2}-k\Bigr)h\Bigr)
\]
is the $r$-th symmetric difference, and $\|\cdot\|_S$ denotes the supremum norm on a set $S$.
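For concreteness, writing out (1.2) in the smallest nontrivial case $r=2$ recovers the classical second symmetric difference:

```latex
% r = 2 in (1.2): terms k = 0, 1, 2 with binomial weights 1, -2, 1
\Delta_h^2 f(x) = \sum_{k=0}^{2} (-1)^k \binom{2}{k}\, f\Bigl(x + \Bigl(\frac{2}{2} - k\Bigr)h\Bigr)
                = f(x+h) - 2f(x) + f(x-h).
```

Similarly, for $r=1$ one gets $\Delta_h^1 f(x)=f(x+h/2)-f(x-h/2)$.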

In (1.1) it is agreed that $\Delta_h^r f(x)=0$ if $\bigl[x-\frac{r}{2}h,\,x+\frac{r}{2}h\bigr]\not\subseteq[-1,1]$. Let

\[
E_n(f)_{[-1,1]}=\inf_{p_n}\|f-p_n\|_{[-1,1]}
\]

be the error of best approximation of f by polynomials pn of degree at most n.

Then (see [12, Theorem 7.2.1]) for n ≥ r

\[
(1.3)\qquad E_n(f)_{[-1,1]}\le M\,\omega_\varphi^r\Bigl(f,\frac{1}{n}\Bigr)
\]

and (see [12, Theorem 7.2.4])

\[
(1.4)\qquad \omega_\varphi^r\Bigl(f,\frac{1}{n}\Bigr)\le \frac{M}{n^r}\sum_{k=0}^{n}(k+1)^{r-1}E_k(f)_{[-1,1]},\qquad n=1,2,\ldots,
\]

where M depends only on r.

(1.3)–(1.4) constitute what is usually called a characterization of the rate of best polynomial approximation in terms of moduli of smoothness; e.g. they give
\[
E_n(f)_{[-1,1]}=O(n^{-\alpha})\ \Longleftrightarrow\ \omega_\varphi^r(f,\delta)=O(\delta^\alpha)
\]
for $0<\alpha<r$. This is precisely what we want to do for multidimensional polynomial approximation in $\mathbf{R}^d$.
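To see how the weighted sum in (1.4) yields the nontrivial implication here, suppose $E_k(f)_{[-1,1]}\le C(k+1)^{-\alpha}$ for some $0<\alpha<r$ (the constants $C$ and $C_1$ below are introduced only for this sketch). Then (1.4) gives

```latex
\omega_\varphi^r\Bigl(f,\frac{1}{n}\Bigr)
  \le \frac{M}{n^r}\sum_{k=0}^{n}(k+1)^{r-1}\,C(k+1)^{-\alpha}
  = \frac{CM}{n^r}\sum_{k=0}^{n}(k+1)^{r-\alpha-1}
  \le \frac{C_1}{n^r}\,n^{r-\alpha}
  = C_1\,n^{-\alpha},
```

since $r-\alpha-1>-1$, so the last sum is of order $n^{r-\alpha}$. The opposite implication is immediate from (1.3) together with the monotonicity of $\omega_\varphi^r(f,\delta)$ in $\delta$.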

(1.3) is usually called the direct, or Jackson-type, estimate, while (1.4) is the converse, or Stechkin-type, estimate. The latter (1.4) is only a weak converse to (1.3), but that is natural, since $E_n(f)$ can tend to zero arbitrarily fast, while $\omega_\varphi^r(f,1/n)\ge c/n^r$ unless $f$ is a polynomial of degree at most $r-1$.

In $\mathbf{R}^d$ we call a closed set $K\subset\mathbf{R}^d$ a convex polytope if it is the convex hull of finitely many points. $K$ is $d$-dimensional if it has an inner point, which we shall always assume. The analogue of the $\varphi$-modulus of smoothness on $K$ was defined in [12, Chapter 12], and to recall its definition we need to consider the function along lines in different directions. A direction $e$ in $\mathbf{R}^d$ is just a unit vector $e\in\mathbf{R}^d$. Clearly,
