By setting $u_j(x) = 0$ for $x_j \in X \setminus Y$, we have (1) and (2). For (3), we realize that $\|x - x_j\|_2 > C_2 h$ means that $x_j$ is not in the cone $C(x)$, and thus $u_j(x) = 0$ by construction.

Note that all constants can be improved if better estimates on the cone condition are used. Moreover, if the region is a ball then it is better to use the results on balls, i.e. instead of using small cones to construct the functions $u_j$ one should use small balls to derive better constants. This philosophy holds for other regions as well: the more information is known about a region, the better the constants.

Finally, we want to point out that
\[
\sum_{j=1}^{N} |u_j(x)| \le 2
\]

means that the Lebesgue functions of the associated quasi-interpolants are uniformly

bounded by 2. The reader should compare this with the known results on polynomial

interpolation. The price we have to pay for this uniform bound is oversampling: we use significantly more points than $\dim \pi_m(\mathbb{R}^d)$.

3.4 Notes and comments

The idea of using local polynomial reproductions in approximation theory is quite an old

one. One could say that even splines are based on this background.

Norming sets have been introduced in the context of interpolation by positive definite

functions on the sphere; see Jetter et al. [89]. With this approach it was for the first time

possible to control the constants involved in the process of bounding certain Lebesgue functions. Since then, norming sets have been used, for example, to derive results on quadrature

rules on the sphere (Mhaskar et al. [131,132]), approximation with interpolatory constraints

(Mhaskar et al. [130]), the moving least squares approximation (Wendland [196]), and the

determination of explicit constants (Narcowich et al. [148]). The presentation given here is

mainly motivated by Mhaskar et al. [132]. In the next chapter we will have a closer look at

an application in the context of the moving least squares approximation.

4

Moving least squares

The crucial point in local polynomial reproduction is the compact support of the basis functions $u_j$. To be more precise, all supports have to be of the same size. The local support of the $u_j$ means that data points far away from the current point of interest $x$ have no influence on the function value at $x$. This is often a reasonable assumption.

The last chapter did not answer the question of how to construct families with local polynomial reproduction efficiently. The moving least squares method presented in this chapter forms an example of such a construction.

4.1 Definition and characterization

Suppose again that discrete values of a function $f$ are given at certain data sites $X = \{x_1, \ldots, x_N\} \subseteq \Omega \subseteq \mathbb{R}^d$. Throughout this chapter, $\Omega$ is supposed to satisfy an interior cone condition with angle $\theta$ and radius $r$.

The idea of the moving least squares approximation is to solve for every point x a locally

weighted least squares problem. This appears to be quite expensive at first sight, but it will
turn out to be a very efficient method. Moreover, in many applications one is only interested

in a few evaluations. For such applications the moving least squares approximation is even

more attractive, because it is not necessary to set up and solve a large system.

The influence of the data points is governed by a weight function $w : \Omega \times \Omega \to \mathbb{R}$, which becomes smaller the further away its arguments are from each other. Ideally, $w$ vanishes for arguments $x, y \in \Omega$ with $\|x - y\|_2$ greater than a certain threshold. Such a behavior can be modeled by using a translation-invariant weight function. This means that $w$ is of the form $w(x, y) = \Phi_\delta(x - y)$, $\Phi_\delta = \Phi(\cdot/\delta)$ being the scaled version of a compactly supported function $\Phi : \mathbb{R}^d \to \mathbb{R}$.

Definition 4.1 For $x \in \Omega$, the value $s_{f,X}(x)$ of the moving least squares approximant is given by $s_{f,X}(x) = p^*(x)$, where $p^*$ is the solution of
\[
\min\left\{ \sum_{i=1}^{N} [f(x_i) - p(x_i)]^2\, w(x, x_i) : p \in \pi_m(\mathbb{R}^d) \right\}. \tag{4.1}
\]



In what follows we will assume that $w(x, y) = \Phi_\delta(x - y)$ uses a nonnegative function $\Phi$ with support in the unit ball $B(0, 1)$, which is positive on the ball $B(0, 1/2)$ of radius $1/2$. The latter will be important for our error analysis. In many applications it might be convenient to assume $\Phi$ to be radial as well, meaning that $\Phi(x) = \phi(\|x\|_2)$, $x \in \mathbb{R}^d$, with a univariate and nonnegative function $\phi : [0, \infty) \to \mathbb{R}$ that is positive on $[0, 1/2]$ and supported in $[0, 1]$. But this is not essential for what we have in mind.
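As a concrete illustration, one admissible choice (ours, not prescribed by the text) is the truncated polynomial $\phi(r) = (1 - r)_+^2$, which is positive on $[0, 1/2]$ and supported in $[0, 1]$; a minimal sketch of the resulting radial, translation-invariant weight might look as follows:

```python
import numpy as np

def phi(r):
    """Univariate weight phi(r) = (1 - r)_+^2: an illustrative choice that is
    positive on [0, 1/2] and supported in [0, 1], as required in the text."""
    r = np.asarray(r, dtype=float)
    return np.where(r < 1.0, (1.0 - np.minimum(r, 1.0)) ** 2, 0.0)

def Phi(x):
    """Radial weight on R^d: Phi(x) = phi(||x||_2)."""
    return float(phi(np.linalg.norm(x)))

def w(x, y, delta):
    """Translation-invariant scaled weight w(x, y) = Phi((x - y) / delta)."""
    return Phi((np.asarray(x, dtype=float) - np.asarray(y, dtype=float)) / delta)
```

By construction, $w(x, y)$ vanishes whenever $\|x - y\|_2 \ge \delta$, so only nearby data sites influence the local least squares problem.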

In any case we can reformulate (4.1) more precisely as
\[
\min\left\{ \sum_{i \in I(x)} [f(x_i) - p(x_i)]^2\, \Phi_\delta(x - x_i) : p \in \pi_m(\mathbb{R}^d) \right\}, \tag{4.2}
\]
using the set of indices
\[
I(x) \equiv I(x, \delta, X) := \{ j \in \{1, \ldots, N\} : \|x - x_j\|_2 < \delta \}.
\]
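For a one-dimensional sketch of how (4.2) can be solved numerically at a single evaluation point, one can scale a Vandermonde system by the square roots of the weights and hand it to an ordinary least squares solver. The weight $(1-r)_+^2$ and all names below are our illustrative choices, not part of the text:

```python
import numpy as np

def mls_eval(x, X, f_vals, delta, m):
    """Evaluate the moving least squares approximant at one point x (1D data)
    by solving the locally weighted least squares problem (4.2)."""
    X = np.asarray(X, dtype=float)
    f_vals = np.asarray(f_vals, dtype=float)
    r = np.abs(x - X) / delta
    idx = r < 1.0                          # index set I(x): sites with |x - x_j| < delta
    wts = (1.0 - r[idx]) ** 2              # compactly supported weights (our choice)
    # Vandermonde matrix of polynomials of degree <= m in the shifted variable t - x
    V = np.vander(X[idx] - x, N=m + 1, increasing=True)
    sw = np.sqrt(wts)
    # min || sqrt(W) (V c - f) ||_2 is equivalent to the weighted problem (4.2)
    c, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f_vals[idx], rcond=None)
    return c[0]                            # p*(x): constant term of the shifted basis
```

If the data come from a polynomial of degree at most $m$ and $I(x)$ contains a $\pi_m$-unisolvent set, the weighted residual is zero for that polynomial, so the approximant reproduces it exactly.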

So far it is not clear at all why moving least squares provides local polynomial reproduction. It is quite instructive to have a look at the simplest case, namely $m = 0$. In this situation

only constants are reproduced and the minimization problem has the explicit solution
\[
s_{f,X}(x) = \sum_{j \in I(x)} f(x_j)\, \underbrace{\frac{\Phi_\delta(x - x_j)}{\sum_{i \in I(x)} \Phi_\delta(x - x_i)}}_{=:\, a_j(x)}. \tag{4.3}
\]

Such an approximant is sometimes also called the Shepard approximant (Shepard [177]). Obviously, it reproduces constants. The basis functions $a_j$ have a support of radius $\delta$, so that $\delta > C_2 h_{X,\Omega}$ is a good choice. Finally, one can immediately see that $\sum_j |a_j(x)| = 1$. Hence we have indeed a local polynomial reproduction for polynomials of degree zero. Moreover, the smoothness of the approximant is determined by the smoothness of $\Phi$, provided that the denominator is always nonzero. The latter is always true for a sufficiently large $\delta > 0$.
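A minimal sketch of the Shepard approximant (4.3) in 1D, again with the illustrative weight $(1-r)_+^2$ (our choice), might look as follows:

```python
import numpy as np

def shepard(x, X, f_vals, delta):
    """Shepard approximant (4.3): moving least squares with m = 0.
    Requires delta large enough that at least one weight is nonzero at x."""
    X = np.asarray(X, dtype=float)
    r = np.abs(x - X) / delta
    wts = np.where(r < 1.0, (1.0 - np.minimum(r, 1.0)) ** 2, 0.0)
    a = wts / wts.sum()                    # basis functions a_j(x); they sum to 1
    return float(a @ np.asarray(f_vals, dtype=float))
```

Since the $a_j$ are nonnegative and sum to one, constants are reproduced exactly and $\sum_j |a_j(x)| = 1$, as noted above.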

Let us now come back to the case of a general $m$. We start with a reformulation of the

initial minimization problem. To this end we need a result on quadratic optimization.

Lemma 4.2 Let $a \in \mathbb{R}$, $b \in \mathbb{R}^n$, $A \in \mathbb{R}^{n \times n}$, and $P \in \mathbb{R}^{n \times m}$ be given. For $v \in \mathbb{R}^m$ define $M_v := \{x \in \mathbb{R}^n : P^T x = v\}$. Suppose that $A = A^T$ is positive definite on $M_0$. If $M_v$ is not empty then the quadratic function
\[
f(x) := a + b^T x + x^T A x
\]
has a unique minimum on $M_v$.

Proof Obviously, the set $M_v$ is convex and closed. Hence, if $f$ is strictly convex on $M_v$ then we immediately have uniqueness. Since $A$ is positive definite on the subspace $M_0$, there exists a $\lambda_{\min} > 0$ such that $x^T A x \ge \lambda_{\min} \|x\|_2^2$ for all $x \in M_0$. Then an easy calculation gives, for $x, y \in M_v$ and $\lambda \in [0, 1]$ (note that $x - y \in M_0$),
\[
(1 - \lambda) f(x) + \lambda f(y) - f((1 - \lambda)x + \lambda y) = \lambda(1 - \lambda)(x - y)^T A (x - y) \ge \lambda(1 - \lambda)\, \lambda_{\min} \|x - y\|_2^2,
\]


showing that $f$ is indeed strictly convex on $M_v$. To prove existence, we use a method standard in this context. Since $M_v$ is not empty we can fix an $x_0 \in M_v$ and restrict the minimization problem to $\widetilde{M}_v := \{x \in M_v : f(x) \le f(x_0)\}$. Since $f$ is continuous, $\widetilde{M}_v$ is also closed. Thus it remains to show that it is also bounded. Every $x \in \widetilde{M}_v$ can be represented as $x = x_0 + z$ with $z \in M_0$. Hence
\[
0 \ge f(x) - f(x_0) = (b + 2Ax_0)^T (x - x_0) + (x - x_0)^T A (x - x_0) \ge \lambda_{\min} \|x - x_0\|_2^2 - \|b + 2Ax_0\|_2\, \|x - x_0\|_2,
\]
which shows that $\|x - x_0\|_2$, and therefore also $\|x\|_2$, is uniformly bounded.

We have formulated Lemma 4.2 more generally than is necessary for our purposes here, but we will need this more general version in a later chapter. Here, we consider only minimization problems where $A$ is positive definite on all of $\mathbb{R}^n$. Note that setting $P = 0$ and $v = 0$ gives the global optimization problem on $M_v = \mathbb{R}^n$.
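For readers who want to compute the minimizer of Lemma 4.2 numerically, one standard route (not spelled out in the text) is to solve the KKT system obtained by setting the gradient of the Lagrangian of $f(x) = a + b^T x + x^T A x$ subject to $P^T x = v$ to zero; a sketch, assuming at least one constraint:

```python
import numpy as np

def constrained_quadratic_min(b, A, P, v):
    """Minimize f(x) = a + b^T x + x^T A x over M_v = {x : P^T x = v}
    (the constant a does not affect the minimizer). The Lagrangian's
    stationarity and feasibility conditions give the linear KKT system
        [ 2A   P ] [ x ]   [ -b ]
        [ P^T  0 ] [ l ] = [  v ].
    Assumes m >= 1 constraints and A positive definite on M_0."""
    n, m = P.shape
    K = np.block([[2.0 * A, P],
                  [P.T, np.zeros((m, m))]])
    rhs = np.concatenate([-b, v])
    return np.linalg.solve(K, rhs)[:n]
```

For example, minimizing $\|x\|_2^2$ subject to $x_1 = 1$ returns the point $(1, 0)$, in line with the uniqueness statement of the lemma.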

Theorem 4.3 Suppose that for every $x \in \Omega$ the set $\{x_j : j \in I(x)\}$ is $\pi_m(\mathbb{R}^d)$-unisolvent. In this situation, problem (4.2) is uniquely solvable and the solution $s_{f,X}(x) = p^*(x)$ can be represented as