Corollary 4.8 In the situation of Theorem 4.7 define Ω* to be the closure of ∪_{x∈Ω} B(x, 2C₂h₀). Then there exists a constant c > 0 that can be computed explicitly such that for all f ∈ C^{m+1}(Ω*) and all quasi-uniform X ⊆ Ω with h_{X,Ω} ≤ h₀ the approximation error is bounded as follows:

$$\|f - s_{f,X}\|_{L_\infty(\Omega)} \le c\, h_{X,\Omega}^{m+1}\, |f|_{C^{m+1}(\Omega^*)}.$$

We end this section with an investigation of the computational complexity of the moving

least squares method and with remarks on its practicability. We restrict our analysis to the

situation of quasi-uniform data sites.


From Corollary 4.4 we can read off that the computational complexity of the approximant at a single point x is bounded by O(Q³ + Q² #I(x) + Q #I(x)), if Q denotes the dimension of π_m(ℝ^d) and if the time necessary to evaluate a polynomial is considered to be constant. We

use the classical Landau symbol O to suppress unimportant constants. We know, however,

that #I (x) is bounded by a constant independent of x. Thus the complexity for a single

evaluation is constant if the set of indices I (x) is known in advance. But this can be done

in a preprocessing step that takes O(N ) time and is based on a simple boxing strategy. To

be more precise, it is not reasonable to collect the relevant data sites (i.e. I (x)) for each

evaluation point separately. This would lead to an O(N M) complexity, if M denotes the

number of evaluations. Instead, it is better to divide the domain into boxes of side length

h and collect for each box the data sites that it contains using a loop over all points. This

can obviously be done in linear time. After that we can find for every x ∈ Ω the relevant

boxes in constant time, so that the overall complexity for M evaluations of the approximant

based on N points is O(N + M). A more thorough analysis of data structures for points in

d-dimensional space is made in Chapter 14.
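The boxing strategy described above can be sketched as follows; the function names and the grid-key computation are illustrative assumptions, not taken from the text:

```python
from collections import defaultdict
from itertools import product

def build_boxes(points, h):
    """Preprocessing: assign each of the N data sites to a box of
    side length h, using one loop over all points -- O(N) time."""
    boxes = defaultdict(list)
    for idx, x in enumerate(points):
        boxes[tuple(int(c // h) for c in x)].append(idx)
    return boxes

def candidate_indices(boxes, x, h):
    """Per evaluation point: inspect the box containing x and its
    3^d - 1 neighboring boxes -- constant time, independent of N."""
    key = tuple(int(c // h) for c in x)
    out = []
    for off in product((-1, 0, 1), repeat=len(key)):
        out.extend(boxes.get(tuple(k + o for k, o in zip(key, off)), []))
    return out
```

With this preprocessing, M evaluations of the approximant based on N points cost O(N + M), as stated above.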

Finally, it is not reasonable to use a fixed polynomial basis such as a set of monomials for every evaluation point x ∈ Ω. Instead, a local monomial basis centered at the evaluation point x and scaled by the support radius δ leads to a more stable method.
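In one dimension, such a centered and scaled basis might look as follows; this is a minimal sketch, and the function name and degree parameter are assumptions:

```python
def local_monomials(x_eval, delta, degree):
    """Return basis functions t -> ((t - x_eval)/delta)**k, k = 0..degree.

    Centering at the evaluation point and scaling by the support radius
    keeps the entries of the small Gram system of moderate size, which
    is numerically more stable than a fixed global monomial basis."""
    return [lambda t, k=k: ((t - x_eval) / delta) ** k
            for k in range(degree + 1)]
```

For data sites inside the support, the argument (t − x_eval)/δ stays in [−1, 1], so the basis values remain well scaled.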

4.3 Generalizations

We now discuss two different matters. One deals with the assumption of quasi-uniformity;

the other carries the initial problem over to a more general setting.

The assumption that the data sites are quasi-uniform is often violated, for example, if

the data are clustered or if there are more points in regions where it is supposed that the

unknown function has a difficult behavior. In such a situation our analysis fails. The reason

for this is that we have chosen the same support radius δ everywhere. In our particular

situation we have two possible ways to resolve this problem. On the one hand we could

assign a different support radius to each basis function a_j^*. Even if in general this is a good idea, it would cause problems in the case of moving least squares. The better choice is to let

δ depend on the current evaluation point. Hence, if x lies in a region with a high data density,

δ(x) would be chosen small. If there are only a few points around x, one has to choose δ(x)

rather large. Of course δ, now considered as a function of x, should vary smoothly in x.
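One common way to realize such a density-dependent radius is to tie δ(x) to the distance from x to its k-th nearest data site; this is a sketch, and the choice of k and the safety factor are assumptions, not taken from the text:

```python
def support_radius(x, points, k=10, factor=2.0):
    """Choose delta(x) proportional to the distance from x to its k-th
    nearest data site: small where the data are dense, large where sparse."""
    dists = sorted(sum((a - b) ** 2 for a, b in zip(x, p)) ** 0.5
                   for p in points)
    return factor * dists[min(k, len(dists)) - 1]
```

Note that the k-th nearest-neighbor distance is only continuous, not smooth, in x; in practice one would additionally mollify it so that δ varies smoothly, as required above.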

Sometimes one faces a more general approximation problem than the one discussed so

far. For example, if partial differential equations have to be solved numerically, information

is given not on the function itself but on its derivatives. In the case of moving least squares,

most of what was done in the point-evaluation case holds also in a more general setting.

To describe the general problem, we will assume that we are working in a function space F ⊆ C(Ω). The information we have is represented by a finite number of continuous and linear functionals λ₁, …, λ_N ∈ F*, and the information we seek is represented by a functional λ ∈ F*. Hence, we are given λ₁(f), …, λ_N(f) and we want to use these data to get a good approximation λ̃(f) to λ(f). The idea of moving least squares or, more generally,


of local polynomial reproductions requires the reproduction process to be exact on a finite-dimensional subspace P of F. This has been a set of polynomials so far but we can also

consider other function spaces.

In the spirit of moving least squares we choose a nonnegative weight function w, this time defined on F* × F*, which measures the correlation of two functionals, and we define λ̃(f) to be λ(p*), where p* ∈ P is the solution of the minimization problem

$$\min\left\{\sum_{i=1}^{N}\bigl[\lambda_i(f)-\lambda_i(p)\bigr]^2\, w(\lambda,\lambda_i)\;:\;p\in P\right\}.\tag{4.9}$$

This new problem reduces to the old one by setting λ_j = δ_{x_j}, 1 ≤ j ≤ N, λ = δ_x, and w(δ_{x_j}, δ_x) = Φ_δ(x − x_j).

A closer look at what we did to prove existence and uniqueness shows that Theorem 4.3 remains true even in this more general situation. We only have to adapt the notion of unisolvent sets. But it should be clear that Λ = {λ₁, …, λ_N} is P-unisolvent if and only if λ_j(p) = 0, 1 ≤ j ≤ N, implies p = 0. Moreover, we have to replace the set of indices I(x) by I(λ) = {j : w(λ_j, λ) > 0}.

Theorem 4.9 Suppose that Λ = {λ₁, …, λ_N} is P-unisolvent. In this situation, problem (4.9) is uniquely solvable and the solution λ̃(f) = λ(p*) can be represented as

$$\tilde\lambda(f)=\sum_{i\in I(\lambda)} a_i^*\,\lambda_i(f),$$

where the coefficients a_i^* are determined by minimizing the quadratic form

$$\sum_{i\in I(\lambda)} \frac{1}{w(\lambda,\lambda_i)}\,a_i^2\tag{4.10}$$

under the constraints

$$\sum_{i\in I(\lambda)} a_i\,\lambda_i(p)=\lambda(p),\qquad p\in P.\tag{4.11}$$

It is crucial that Λ = {λ₁, …, λ_N} is P-unisolvent; otherwise there will not be a unique solution. Hence, it is in general not possible to recover f from its derivatives in the interior of a domain Ω and its function values at the boundary of Ω, as one would do in classical collocation methods for the numerical solution of partial differential equations. The information on the boundary gets lost in the interior because of the compact support of the weight function. But the compact support is necessary for an efficient evaluation.

4.4 Notes and comments

Approximation by moving least squares has its origin in the early paper [101] by Lancaster

and Salkauskas from 1981 with special cases going back to McLain [120, 121] in 1974 and


1976 and to Shepard [177] in 1968. Other early papers are those by Farwig [53–55], who mainly concentrated on investigating the approximation order. It is interesting to see that Farwig remarked in [53] that the “least squares problem . . . varies in x” and hence “computing the global approximant . . . is generally very time-consuming”. With the development of

fast computers and ef¬cient data structures for the data sites, this statement no longer holds,

and the moving least squares approximation has in recent times attracted attention again,

in the present context and also for the numerical solution of partial differential equations;

see Belytschko et al. [23].

The equivalence of the two minimization problems given in Theorem 4.3 was first pointed out by Levin [104, 105]. The error estimates given here are based upon the present author's paper [196].

5

Auxiliary tools from analysis and measure theory

For our investigations on radial basis functions in the following chapters it is crucial to

collect several results from different branches of mathematics. Hence, this chapter is a

repository of such results. In particular, we are concerned with special functions such as

Bessel functions and the Γ-function. We discuss the features of Fourier transforms and give

an introduction to the aspects of measure theory that are relevant for our purposes. The

reader who is not interested in these technical matters could skip this chapter and come

back to it whenever necessary. Nonetheless, it is strongly recommended that one should at

least have a look at the definition of Fourier transforms to become familiar with the notation

we use. Because of the diversity of results presented here, we cannot give proofs in every

case.

5.1 Bessel functions

Bessel functions will play an important role in what follows. Most of what we discuss here

can be found in the fundamental book [187] by Watson.

The starting point for introducing Bessel functions is to remind the reader of the classical

Γ-function and some of its features.

Definition 5.1 The Γ-function is defined by

$$\Gamma(z):=\lim_{n\to\infty}\frac{n!\;n^{z}}{z(z+1)\cdots(z+n)}$$
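This limit can be checked numerically against known values such as Γ(1) = 1 and Γ(1/2) = √π; in the following sketch the truncation index n and the log-space evaluation are implementation choices, not part of the definition:

```python
import math

def gamma_gauss(z, n=100000):
    """Approximate Gamma(z) by Gauss's limit  n! n^z / (z(z+1)...(z+n)).

    Evaluated in log space to avoid overflow of n! for large n."""
    log_val = math.lgamma(n + 1) + z * math.log(n)
    log_val -= sum(math.log(z + k) for k in range(n + 1))
    return math.exp(log_val)
```

Convergence is slow, roughly O(1/n): for z = 1 the truncated product equals n/(n + 1) exactly, so a large n is needed for even a few correct digits.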