From this we can deduce that $\|D^\alpha \varphi_n\|_{L_2(\Omega)} \to 0$ for $n \to \infty$ provided that $|\alpha| = k$. Since $\Omega$ satisfies an interior cone condition, $H^k(\Omega)$ must be compactly embedded in $H^{k-1}(\Omega)$. This means that $\{\varphi_n\}$ is relatively compact in $H^{k-1}(\Omega)$. Hence there exists a convergent subsequence. For simplicity we call this subsequence $\{\varphi_n\}$ again, i.e. $\varphi_n \to \varphi$ in $H^{k-1}(\Omega)$ with $\varphi \in H^{k-1}(\Omega)$. But since $\|D^\alpha \varphi_n\|_{L_2(\Omega)} \to 0$ if $|\alpha| = k$, $\{\varphi_n\}$ is a Cauchy sequence even in $H^k(\Omega)$, which converges to an element, say $\widetilde{\varphi} \in H^k(\Omega)$. Since it also converges in $H^{k-1}(\Omega)$ to $\varphi$ we must have $\varphi = \widetilde{\varphi} \in H^k(\Omega)$ and $\varphi_n \to \varphi$ in $H^k(\Omega)$. Moreover, we can conclude that $D^\alpha \varphi = 0$ for all $|\alpha| = k$. As in the proof of Lemma 10.38 we see that $\varphi$ coincides on $\Omega$ with a polynomial from $\pi_{k-1}(\mathbb{R}^d)$. By Sobolev's embedding theorem again, we find that $\varphi_n(x) \to \varphi(x)$, $x \in \Omega$. In conjunction with (11.21) this means that $|\varphi(\xi_j)|^2 = 0$ and thus $\varphi(\xi_j) = 0$ for all $1 \le j \le Q$. Since $\Xi$ is $\pi_{k-1}(\mathbb{R}^d)$-unisolvent, we can conclude that $\varphi = 0$, which contradicts $\|\varphi\|_{H^k(\Omega)} = \lim_{n \to \infty} \|\varphi_n\|_{H^k(\Omega)} = 1$.

Theorem 11.36 Let $k > m + d/2$. Suppose that $\Omega \subseteq \mathbb{R}^d$ is open and bounded and satisfies an interior cone condition. Consider the thin-plate splines $\phi_{d,k}$ as conditionally positive definite of order $k$. Then the error between $f \in H^k(\Omega)$ and its interpolant $s_{f,X}$ can be bounded by
$$
|f - s_{f,X}|_{W_p^m(\Omega)} \le C h_{X,\Omega}^{\,k - m - d(1/2 - 1/p)_+} \, |f|_{BL_k(\Omega)}
$$
for $1 \le p \le \infty$. Finally, if $f \in H^{2k}(\Omega)$ has compact support in $\Omega$, then we have the improved bound
$$
|f(x) - s_{f,X}(x)| \le C h_{X,\Omega}^{\,2k - d/2} \, \|\Delta^k f\|_{L_2(\Omega)}, \qquad x \in \Omega.
$$

Proof The first estimate is obviously true in the case $p = \infty$. Hence we can assume that $1 \le p < \infty$ for the first two estimates.
According to Lemma 11.35 we can extend $f \in H^k(\Omega)$ to a function $\tilde{f} \in BL_k(\mathbb{R}^d)$. Then the interpolant to $f$ based on $X \subseteq \Omega$ coincides with the interpolant to $\tilde{f}$. The density result


in Theorem 10.40 allows us to apply Theorem 11.32 in this situation also, yielding
$$
|f - s_{f,X}|_{W_p^m(\Omega)} \le C h_{X,\Omega}^{\,k - m - d(1/2 - 1/p)_+} \, |\tilde{f} - s_{\tilde{f},X}|_{BL_k(\mathbb{R}^d)}.
$$
Moreover, by Corollary 10.25 we have $|\tilde{f} - s_{\tilde{f},X}|_{BL_k(\mathbb{R}^d)} \le |\tilde{f}|_{BL_k(\mathbb{R}^d)}$ and, following Lemma 11.35, the latter may be bounded by $K \, |f|_{BL_k(\Omega)}$.
Finally, if $f \in H^{2k}(\Omega)$ has compact support in $\Omega$ then we obviously have $\tilde{f} = f$. Hence, the error estimate just established gives for $p = 2$
$$
\|f - s_{f,X}\|_{L_2(\mathbb{R}^d)} \le C h_{X,\Omega}^{\,k} \, |f - s_{f,X}|_{BL_k(\mathbb{R}^d)}.
$$
Using this in the proof of Theorem 11.24 yields the final error bound.
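To make the last step concrete, here is a sketch of the doubling argument, assuming the $BL_k$ semi-inner product, the orthogonality of the interpolation error to the interpolant, and integration by parts (valid because $f$ has compact support):

```latex
\begin{align*}
|f - s_{f,X}|_{BL_k(\mathbb{R}^d)}^2
  &= (f - s_{f,X},\, f)_{BL_k(\mathbb{R}^d)}
     && \text{(orthogonality)} \\
  &= (-1)^k \int_{\mathbb{R}^d} (f - s_{f,X})\, \Delta^k f \, dx
     && \text{(integration by parts)} \\
  &\le \|f - s_{f,X}\|_{L_2(\mathbb{R}^d)} \, \|\Delta^k f\|_{L_2(\Omega)}
     && \text{(Cauchy--Schwarz)} \\
  &\le C h_{X,\Omega}^{\,k} \, |f - s_{f,X}|_{BL_k(\mathbb{R}^d)} \, \|\Delta^k f\|_{L_2(\Omega)},
\end{align*}
```

so that $|f - s_{f,X}|_{BL_k(\mathbb{R}^d)} \le C h_{X,\Omega}^{\,k} \, \|\Delta^k f\|_{L_2(\Omega)}$; inserting this into the pointwise power-function estimate $|f(x) - s_{f,X}(x)| \le C h_{X,\Omega}^{\,k - d/2} |f - s_{f,X}|_{BL_k(\mathbb{R}^d)}$ of Theorem 11.24 yields the exponent $2k - d/2$.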

Note that the condition for $f \in H^{2k}(\Omega)$ to have compact support in $\Omega$ can be weakened by assuming that certain normal derivatives of $f$ vanish on the boundary.
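The convergence rate in the first bound of Theorem 11.36 is easy to observe numerically. The following sketch is not from the book: since the theorem's thin-plate splines live most naturally in $d \ge 2$, we use the univariate polyharmonic spline $\phi(r) = r^3$ (conditionally positive definite of order 2, with a linear polynomial appended for the side conditions) as a stand-in, and the helper name `polyharmonic_interpolant` is hypothetical.

```python
import numpy as np

def polyharmonic_interpolant(x_nodes, f_vals):
    """Interpolant s(x) = sum_j a_j |x - x_j|^3 + b_0 + b_1 x with the
    side conditions sum_j a_j = sum_j a_j x_j = 0 (CPD of order 2)."""
    n = len(x_nodes)
    A = np.abs(x_nodes[:, None] - x_nodes[None, :]) ** 3   # kernel matrix
    P = np.column_stack([np.ones(n), x_nodes])             # linear polynomials at the nodes
    M = np.block([[A, P], [P.T, np.zeros((2, 2))]])        # saddle-point system
    coef = np.linalg.solve(M, np.concatenate([f_vals, np.zeros(2)]))
    a, b = coef[:n], coef[n:]
    def s(x):
        K = np.abs(x[:, None] - x_nodes[None, :]) ** 3
        return K @ a + b[0] + b[1] * x
    return s

f = lambda x: np.sin(2.0 * np.pi * x)
x_eval = np.linspace(0.0, 1.0, 501)

errors = []
for n_nodes in (5, 9, 17, 33):             # fill distance h halves at each step
    x_nodes = np.linspace(0.0, 1.0, n_nodes)
    s = polyharmonic_interpolant(x_nodes, f(x_nodes))
    errors.append(np.max(np.abs(s(x_eval) - f(x_eval))))

print(errors)                               # maximum errors, shrinking with h
```

Halving the fill distance should shrink the maximum error by a roughly constant factor, in line with an algebraic rate $h^\gamma$.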

Thin-plate splines are probably the most thoroughly examined and best understood basis functions. Nonetheless, there are still some important open problems. More details on this are given, among other topics, in the next section.

11.7 Notes and comments

Nowadays there is common agreement that error estimates for the interpolation process using radial basis functions should first of all take place in the native space. The material presented in the first three sections of this chapter is based upon the pioneering work of Duchon [49], Madych and Nelson [111–113], and Wu and Schaback [204]. But the number of publications in this particular field is steadily increasing, and the interested reader should have a look at the bibliography. Theorem 11.13 borrows ideas from Levesley and Ragozin [103].

Let us point out that the seminal paper [114] by Madych and Nelson is so far the only one (except for the somewhat weaker version [195] by the present author, which is based on the same ideas) that establishes spectral convergence orders for Gaussians and (inverse) multiquadrics. But we also want to emphasize that this paper does not prove a spectral order of the form $e^{-c/h^2}$ for the Gaussians, as is sometimes suggested in other publications.

The experienced reader has probably noticed that the improved estimates derived in Section 11.5 borrow ideas from classical spline theory. As in that case, rather simple Hilbert space arguments are used in Theorem 11.23 to double the approximation order; see also Schaback [166, 168]. The estimates on the $L_p$-norm use a trick that has become known as Duchon's localization trick [49], where it appeared in the context of thin-plate spline approximation for the first time. The general version given here comes from Narcowich et al. [149]. However, in the case of spline approximation it is well known that the optimal order cannot be achieved using only Hilbert space arguments; this should also be true in the case of radial basis function approximation. Moreover, there is quite a gap in the approximation orders that can be realized if the data sites form a regular grid or if they are


truly scattered and finitely many. Let us discuss this in more detail in the case of thin-plate splines. Let $\gamma_p = \min\{k,\, k - d/2 + d/p\}$ be the $L_p$-approximation order as derived in the first case mentioned in Theorem 11.36. It is known (see Buhmann [33]) that for sufficiently smooth functions the $L_\infty$-order is $2k$ if the data sites form a regular infinite grid of grid size $h$. The same is true if the data sites are a finite grid on $[0,1]^d$ and the error is measured in any compact subset of $(0,1)^d$ (see Bejancu [22]). However, it is also known that if the $L_\infty$-order is larger than $2k$ then the approximated function $f$ becomes "trivial" in the sense that it has to satisfy $\Delta^k f = 0$ on $\Omega$, so that the saturation order for these functions is $2k$ (see Schaback and Wendland [171]).

In the papers [90–94] Johnson showed that the $L_p$-order for smooth functions does not exceed $k + 1/p$ for $1 \le p \le \infty$, and he improved it to $\gamma_p + 1/p$ for $1 \le p \le 2$, so that most is known about this case. He also showed that the $L_\infty$-order is $2k$ except for a boundary layer of size $O(h |\log h|)$.

Some people still argue that, on the one hand, the native space, particularly for Gaussians and (inverse) multiquadrics, is rather small, since the Fourier transform of a function from one of these spaces has to decay exponentially fast. This is beyond doubt true but, on the other hand, Shannon's famous sampling theorem holds in its original form for an even smaller class of functions, namely only band-limited ones, and nobody would dispute the importance of this theorem. Nonetheless, there has been some progress in escaping the native space and extending the error estimates to larger function spaces. The first result in this direction came from Schaback [165], where interpolation was replaced by approximation. Yoon [207, 208] approached interpolation using Schaback's ideas but had to work with scaled functions. The most recent results are those of Brownlee and Light [32] for thin-plate splines and Narcowich and Ward [144] for more general basis functions.

12

Stability

In this chapter we will be concerned with the stability of the radial basis function interpolation process. Let us introduce the subject with an example.

We consider the following one-dimensional interpolation problem. The data sites are given by $x_j = j/n \in [0,1]$, $0 \le j \le n$, and the basis function is the inverse multiquadric $\phi(r) = 1/\sqrt{1 + r^2}$. From the previous chapter we know that the sequence of interpolants $s_n$ to a function $f$ from the native space of $\phi$ converges to $f$ as $e^{-cn}$, and this convergence comes from estimates on the power function.

To compute the interpolant, however, we have to invert the interpolation matrices $A_n = (\phi((i - j)/n))$. Unfortunately, the smallest eigenvalue $\lambda_{\min}(n)$ tends to zero as fast as $P^2_{\Phi,X}$. To illustrate this behavior we have plotted both $-\log \lambda_{\min}(n)$, the negative logarithm of the smallest eigenvalue of $A_n$, and $-\log \|P^2_{\Phi,X}\|_{L_\infty[0,1]}$ in Figure 12.1. For a more mathematical investigation let us make the following definition. Assume that $\Phi: \Omega \times \Omega \to \mathbb{R}$

is a conditionally positive definite kernel with respect to $\mathcal{P}$. For $X = \{x_1, \ldots, x_N\} \subseteq \Omega$ and a basis $p_1, \ldots, p_Q$ of $\mathcal{P}$ we set $P = P_X = (p_j(x_i)) \in \mathbb{R}^{N \times Q}$. This allows us to express the side conditions (10.2) from Definition 10.14 by $P_X^T \alpha = 0$. With this notation we define
$$
\lambda_{\min}(A_{\Phi,X}) = \inf_{\alpha \in \mathbb{R}^N \setminus \{0\},\; P_X^T \alpha = 0} \frac{\alpha^T A_{\Phi,X} \alpha}{\alpha^T \alpha}, \qquad (12.1)
$$
where $A_{\Phi,X}$ denotes the usual interpolation matrix. Since the quadratic form (10.3) is positive on the set of all $\alpha \in \mathbb{R}^N$ with $P^T \alpha = 0$ we necessarily have $\lambda_{\min}(A_{\Phi,X}) > 0$.
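The infimum in (12.1) can be evaluated numerically by restricting the quadratic form to the null space of $P_X^T$ and taking the smallest eigenvalue of the restricted matrix. The sketch below is an illustration, not the book's code for Figure 12.1: the helper name `lambda_min_constrained` is hypothetical, and we use the multiquadric $\phi(r) = -\sqrt{1 + r^2}$, conditionally positive definite of order 1 (so $\mathcal{P}$ consists of the constants), on the data sites $x_j = j/n$ from the example above.

```python
import numpy as np

def lambda_min_constrained(A, P):
    """Evaluate (12.1): minimize a^T A a / a^T a over a != 0 with P^T a = 0,
    by restricting A to an orthonormal basis Z of the null space of P^T."""
    Q, _ = np.linalg.qr(P, mode='complete')  # full QR factorization of P
    Z = Q[:, P.shape[1]:]                    # columns orthogonal to range(P)
    return np.linalg.eigvalsh(Z.T @ A @ Z).min()

mq = lambda r: -np.sqrt(1.0 + r ** 2)        # multiquadric, CPD of order 1

lmins = []
for n in (2, 4, 8):
    x = np.arange(n + 1) / n                 # data sites x_j = j/n in [0, 1]
    A = mq(x[:, None] - x[None, :])          # interpolation matrix A_{Phi,X}
    P = np.ones((x.size, 1))                 # constants: side condition sum(a) = 0
    lmins.append(lambda_min_constrained(A, P))

print(lmins)                                 # positive, but shrinking rapidly with n
```

Since the sites for $n$ are contained in those for $2n$, extending a coefficient vector by zeros shows that the constrained minimum can only decrease as $n$ grows, a decay of the same flavour as in Figure 12.1.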

Now why is $\lambda_{\min}(A_{\Phi,X})$ important for the stability of the interpolation process? Obviously, if $\Phi$ is an unconditionally positive definite kernel then $A_{\Phi,X}$ is a positive definite matrix and $\lambda_{\min}(A_{\Phi,X})$ is its smallest eigenvalue. But even in the case of a conditionally positive